00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 296 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.602 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.612 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.623 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:07.623 > git config core.sparsecheckout # timeout=10 00:00:07.632 > git read-tree -mu HEAD # timeout=10 00:00:07.645 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:07.669 Commit message: "packer: Fix typo in a package name" 00:00:07.669 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:07.770 [Pipeline] Start of Pipeline 00:00:07.780 [Pipeline] library 00:00:07.781 Loading library shm_lib@master 00:00:07.781 Library shm_lib@master is cached. Copying from home. 00:00:07.795 [Pipeline] node 00:00:07.803 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.805 [Pipeline] { 00:00:07.812 [Pipeline] catchError 00:00:07.813 [Pipeline] { 00:00:07.824 [Pipeline] wrap 00:00:07.831 [Pipeline] { 00:00:07.839 [Pipeline] stage 00:00:07.841 [Pipeline] { (Prologue) 00:00:08.046 [Pipeline] sh 00:00:08.334 + logger -p user.info -t JENKINS-CI 00:00:08.354 [Pipeline] echo 00:00:08.356 Node: CYP9 00:00:08.363 [Pipeline] sh 00:00:08.670 [Pipeline] setCustomBuildProperty 00:00:08.683 [Pipeline] echo 00:00:08.685 Cleanup processes 00:00:08.690 [Pipeline] sh 00:00:08.980 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.980 647432 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.995 [Pipeline] sh 00:00:09.284 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.284 ++ grep -v 'sudo pgrep' 00:00:09.284 ++ awk '{print $1}' 00:00:09.284 + sudo kill -9 00:00:09.284 + true 00:00:09.300 [Pipeline] cleanWs 00:00:09.311 [WS-CLEANUP] Deleting project workspace... 00:00:09.311 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.318 [WS-CLEANUP] done 00:00:09.322 [Pipeline] setCustomBuildProperty 00:00:09.335 [Pipeline] sh 00:00:09.623 + sudo git config --global --replace-all safe.directory '*' 00:00:09.745 [Pipeline] httpRequest 00:00:11.053 [Pipeline] echo 00:00:11.055 Sorcerer 10.211.164.101 is alive 00:00:11.064 [Pipeline] retry 00:00:11.065 [Pipeline] { 00:00:11.077 [Pipeline] httpRequest 00:00:11.081 HttpMethod: GET 00:00:11.082 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:11.083 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:11.086 Response Code: HTTP/1.1 200 OK 00:00:11.086 Success: Status code 200 is in the accepted range: 200,404 00:00:11.086 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:12.140 [Pipeline] } 00:00:12.157 [Pipeline] // retry 00:00:12.163 [Pipeline] sh 00:00:12.452 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:12.468 [Pipeline] httpRequest 00:00:12.850 [Pipeline] echo 00:00:12.852 Sorcerer 10.211.164.101 is alive 00:00:12.862 [Pipeline] retry 00:00:12.864 [Pipeline] { 00:00:12.877 [Pipeline] httpRequest 00:00:12.882 HttpMethod: GET 00:00:12.882 URL: http://10.211.164.101/packages/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz 00:00:12.883 Sending request to url: http://10.211.164.101/packages/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz 00:00:12.900 Response Code: HTTP/1.1 200 OK 00:00:12.900 Success: Status code 200 is in the accepted range: 200,404 00:00:12.901 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz 00:00:47.347 [Pipeline] } 00:00:47.365 [Pipeline] // retry 00:00:47.373 [Pipeline] sh 00:00:47.665 + tar --no-same-owner -xf spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz 00:00:50.983 [Pipeline] sh 00:00:51.271 + git -C spdk log --oneline -n5 00:00:51.271 1042d663d env_dpdk: align dpdk headers with upstream 00:00:51.271 f417ec25e pkgdep/git: Add patches to ICE driver for changes in >= 6.11 kernels 00:00:51.271 b83903543 pkgdep/git: Add small patch to irdma for >= 6.11 kernels 00:00:51.271 214b0826b nvmf: clear visible_ns flag when no_auto_visible is unset 00:00:51.272 bfd014b57 nvmf: add function for setting ns visibility 00:00:51.286 [Pipeline] sh 00:00:51.574 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/02/25102/2 00:00:52.959 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:52.959 * branch refs/changes/02/25102/2 -> FETCH_HEAD 00:00:52.973 [Pipeline] sh 00:00:53.261 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:54.203 Previous HEAD position was 8d8db71763 eal/alarm_cancel: Fix thread starvation 00:00:54.203 HEAD is now at 39efe3d81c dmadev: fix calloc parameters 00:00:54.213 [Pipeline] } 00:00:54.229 [Pipeline] // stage 00:00:54.240 [Pipeline] stage 00:00:54.242 [Pipeline] { (Prepare) 00:00:54.256 [Pipeline] writeFile 00:00:54.269 [Pipeline] sh 00:00:54.554 + logger -p user.info -t JENKINS-CI 00:00:54.569 [Pipeline] sh 00:00:54.861 + logger -p user.info -t JENKINS-CI 00:00:54.874 [Pipeline] sh 00:00:55.162 + cat autorun-spdk.conf 00:00:55.162 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.162 SPDK_TEST_NVMF=1 00:00:55.162 SPDK_TEST_NVME_CLI=1 00:00:55.162 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.162 SPDK_TEST_NVMF_NICS=e810 00:00:55.162 SPDK_TEST_VFIOUSER=1 00:00:55.162 SPDK_RUN_UBSAN=1 00:00:55.162 NET_TYPE=phy 00:00:55.171 
RUN_NIGHTLY= 00:00:55.176 [Pipeline] readFile 00:00:55.199 [Pipeline] withEnv 00:00:55.202 [Pipeline] { 00:00:55.214 [Pipeline] sh 00:00:55.503 + set -ex 00:00:55.503 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:55.503 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:55.503 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.503 ++ SPDK_TEST_NVMF=1 00:00:55.503 ++ SPDK_TEST_NVME_CLI=1 00:00:55.503 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.503 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.503 ++ SPDK_TEST_VFIOUSER=1 00:00:55.503 ++ SPDK_RUN_UBSAN=1 00:00:55.503 ++ NET_TYPE=phy 00:00:55.503 ++ RUN_NIGHTLY= 00:00:55.503 + case $SPDK_TEST_NVMF_NICS in 00:00:55.503 + DRIVERS=ice 00:00:55.503 + [[ tcp == \r\d\m\a ]] 00:00:55.503 + [[ -n ice ]] 00:00:55.503 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:55.503 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:55.503 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:55.503 rmmod: ERROR: Module irdma is not currently loaded 00:00:55.503 rmmod: ERROR: Module i40iw is not currently loaded 00:00:55.503 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:55.503 + true 00:00:55.503 + for D in $DRIVERS 00:00:55.503 + sudo modprobe ice 00:00:55.503 + exit 0 00:00:55.513 [Pipeline] } 00:00:55.530 [Pipeline] // withEnv 00:00:55.536 [Pipeline] } 00:00:55.552 [Pipeline] // stage 00:00:55.563 [Pipeline] catchError 00:00:55.564 [Pipeline] { 00:00:55.579 [Pipeline] timeout 00:00:55.579 Timeout set to expire in 1 hr 0 min 00:00:55.581 [Pipeline] { 00:00:55.595 [Pipeline] stage 00:00:55.597 [Pipeline] { (Tests) 00:00:55.612 [Pipeline] sh 00:00:55.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:55.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:55.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:55.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:55.905 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:55.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:55.905 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:55.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:55.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:55.905 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:55.905 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:55.905 + source /etc/os-release 00:00:55.905 ++ NAME='Fedora Linux' 00:00:55.905 ++ VERSION='39 (Cloud Edition)' 00:00:55.905 ++ ID=fedora 00:00:55.905 ++ VERSION_ID=39 00:00:55.905 ++ VERSION_CODENAME= 00:00:55.905 ++ PLATFORM_ID=platform:f39 00:00:55.905 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:55.905 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:55.905 ++ LOGO=fedora-logo-icon 00:00:55.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:55.905 ++ HOME_URL=https://fedoraproject.org/ 00:00:55.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:55.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:55.905 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:55.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:55.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:55.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:55.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:55.905 ++ SUPPORT_END=2024-11-12 00:00:55.905 ++ VARIANT='Cloud Edition' 00:00:55.905 ++ VARIANT_ID=cloud 00:00:55.905 + uname -a 00:00:55.905 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:55.905 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:59.208 Hugepages 00:00:59.208 node hugesize free / total 00:00:59.208 node0 1048576kB 0 / 0 00:00:59.208 node0 2048kB 0 / 0 00:00:59.208 node1 1048576kB 0 / 0 00:00:59.208 node1 2048kB 0 / 0 00:00:59.208 00:00:59.208 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:59.208 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:59.208 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:59.208 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:59.208 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:59.208 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:59.208 + rm -f /tmp/spdk-ld-path 00:00:59.208 + source autorun-spdk.conf 00:00:59.208 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.208 ++ SPDK_TEST_NVMF=1 00:00:59.208 ++ SPDK_TEST_NVME_CLI=1 00:00:59.208 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.208 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.208 ++ SPDK_TEST_VFIOUSER=1 00:00:59.208 ++ SPDK_RUN_UBSAN=1 00:00:59.208 ++ NET_TYPE=phy 00:00:59.208 ++ RUN_NIGHTLY= 00:00:59.208 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:59.208 + [[ -n '' ]] 00:00:59.208 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.208 + for M in /var/spdk/build-*-manifest.txt 00:00:59.208 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:00:59.208 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:59.208 + for M in /var/spdk/build-*-manifest.txt 00:00:59.208 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:59.208 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:59.208 + for M in /var/spdk/build-*-manifest.txt 00:00:59.208 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:59.208 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:59.208 ++ uname 00:00:59.208 + [[ Linux == \L\i\n\u\x ]] 00:00:59.208 + sudo dmesg -T 00:00:59.208 + sudo dmesg --clear 00:00:59.208 + dmesg_pid=649024 00:00:59.208 + [[ Fedora Linux == FreeBSD ]] 00:00:59.208 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.208 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.208 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:59.208 + [[ -x /usr/src/fio-static/fio ]] 00:00:59.208 + export FIO_BIN=/usr/src/fio-static/fio 00:00:59.208 + FIO_BIN=/usr/src/fio-static/fio 00:00:59.208 + sudo dmesg -Tw 00:00:59.208 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:59.208 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:59.208 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:59.208 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.208 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.208 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:59.208 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.208 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.208 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.208 Test configuration: 00:00:59.208 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.208 SPDK_TEST_NVMF=1 00:00:59.208 SPDK_TEST_NVME_CLI=1 00:00:59.208 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.208 SPDK_TEST_NVMF_NICS=e810 00:00:59.208 SPDK_TEST_VFIOUSER=1 00:00:59.208 SPDK_RUN_UBSAN=1 00:00:59.208 NET_TYPE=phy 00:00:59.470 RUN_NIGHTLY= 11:45:35 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:00:59.470 11:45:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:59.470 11:45:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:59.470 11:45:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:59.470 11:45:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:59.470 11:45:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:59.470 11:45:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.470 11:45:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:00:59.470 11:45:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.470 11:45:35 -- paths/export.sh@5 -- $ export PATH 00:00:59.470 11:45:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.470 11:45:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:59.470 11:45:35 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:59.470 11:45:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729503935.XXXXXX 00:00:59.470 11:45:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729503935.yRtAGS 00:00:59.470 11:45:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:59.470 11:45:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:59.470 11:45:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:59.470 11:45:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:59.470 11:45:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:59.470 11:45:35 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:59.470 11:45:35 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:59.470 11:45:35 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.471 11:45:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:59.471 11:45:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:59.471 11:45:35 -- pm/common@17 -- $ local monitor 00:00:59.471 11:45:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.471 11:45:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.471 11:45:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.471 11:45:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.471 11:45:35 -- pm/common@21 -- $ date +%s 00:00:59.471 11:45:35 -- pm/common@25 -- $ sleep 1 00:00:59.471 11:45:35 -- pm/common@21 -- $ date +%s 00:00:59.471 11:45:35 -- pm/common@21 -- $ date +%s 00:00:59.471 11:45:35 -- pm/common@21 -- $ date +%s 00:00:59.471 11:45:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autobuild.sh.1729503935 00:00:59.471 11:45:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729503935 00:00:59.471 11:45:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729503935 00:00:59.471 11:45:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729503935 00:00:59.471 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729503935_collect-cpu-load.pm.log 00:00:59.471 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729503935_collect-vmstat.pm.log 00:00:59.471 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729503935_collect-cpu-temp.pm.log 00:00:59.471 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729503935_collect-bmc-pm.bmc.pm.log 00:01:00.415 11:45:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:00.415 11:45:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:00.415 11:45:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:00.415 11:45:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.415 11:45:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:00.415 Mon Oct 21 09:45:36 AM UTC 2024 00:01:00.415 11:45:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:00.415 v25.01-pre-77-g1042d663d 00:01:00.415 11:45:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:00.415 11:45:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:00.415 11:45:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:00.415 11:45:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:00.415 11:45:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:00.415 11:45:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.415 ************************************ 00:01:00.415 START TEST ubsan 00:01:00.415 ************************************ 00:01:00.415 11:45:36 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:00.415 using ubsan 00:01:00.415 00:01:00.415 real 0m0.001s 00:01:00.415 user 0m0.001s 00:01:00.415 sys 0m0.000s 00:01:00.415 11:45:36 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:00.415 11:45:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:00.415 ************************************ 00:01:00.415 END TEST ubsan 00:01:00.415 ************************************ 00:01:00.415 11:45:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:00.415 11:45:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:00.415 11:45:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:00.415 11:45:36 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:00.676 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:00.676 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:00.937 Using 'verbs' RDMA provider 00:01:16.793 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:29.164 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:29.737 Creating mk/config.mk...done. 00:01:29.737 Creating mk/cc.flags.mk...done. 00:01:29.737 Type 'make' to build. 00:01:29.737 11:46:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:29.737 11:46:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:29.737 11:46:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:29.737 11:46:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.737 ************************************ 00:01:29.737 START TEST make 00:01:29.737 ************************************ 00:01:29.737 11:46:06 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:30.309 make[1]: Nothing to be done for 'all'. 00:01:31.691 The Meson build system 00:01:31.691 Version: 1.5.0 00:01:31.691 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:31.691 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.691 Build type: native build 00:01:31.691 Project name: libvfio-user 00:01:31.691 Project version: 0.0.1 00:01:31.691 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:31.691 C linker for the host machine: cc ld.bfd 2.40-14 00:01:31.691 Host machine cpu family: x86_64 00:01:31.691 Host machine cpu: x86_64 00:01:31.691 Run-time dependency threads found: YES 00:01:31.691 Library dl found: YES 00:01:31.691 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:31.691 Run-time dependency json-c found: YES 0.17 00:01:31.691 Run-time dependency cmocka found: YES 1.1.7 00:01:31.691 Program pytest-3 found: NO 00:01:31.691 Program flake8 found: NO 00:01:31.691 Program misspell-fixer found: NO 00:01:31.691 Program restructuredtext-lint found: NO 00:01:31.691 Program valgrind found: YES (/usr/bin/valgrind) 00:01:31.691 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:31.691 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.691 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.691 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:31.691 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:31.691 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:31.691 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:31.691 Build targets in project: 8 00:01:31.691 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:31.691 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:31.691 00:01:31.691 libvfio-user 0.0.1 00:01:31.691 00:01:31.691 User defined options 00:01:31.691 buildtype : debug 00:01:31.691 default_library: shared 00:01:31.691 libdir : /usr/local/lib 00:01:31.691 00:01:31.691 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.951 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.211 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:32.212 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:32.212 [3/37] Compiling C object samples/null.p/null.c.o 00:01:32.212 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:32.212 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:32.212 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:32.212 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:32.212 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:32.212 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:32.212 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:32.212 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:32.212 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:32.212 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:32.212 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:32.212 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:32.212 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:32.212 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:32.212 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:32.212 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:32.212 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:32.212 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:32.212 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:32.212 [23/37] Compiling C object samples/server.p/server.c.o 00:01:32.212 [24/37] Compiling C object samples/client.p/client.c.o 00:01:32.212 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:32.212 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:32.212 [27/37] Linking target samples/client 00:01:32.212 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:32.472 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:32.472 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:32.472 [31/37] Linking target test/unit_tests 00:01:32.472 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:32.472 [33/37] Linking target samples/null 00:01:32.472 [34/37] Linking target samples/server 00:01:32.472 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:32.472 [36/37] Linking target samples/lspci 00:01:32.472 [37/37] Linking target samples/gpio-pci-idio-16 00:01:32.472 INFO: autodetecting backend as ninja 00:01:32.472 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:32.733 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.995 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.995 ninja: no work to do. 00:01:39.590 The Meson build system 00:01:39.590 Version: 1.5.0 00:01:39.590 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:39.590 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:39.590 Build type: native build 00:01:39.590 Program cat found: YES (/usr/bin/cat) 00:01:39.590 Project name: DPDK 00:01:39.590 Project version: 23.11.0 00:01:39.590 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:39.590 C linker for the host machine: cc ld.bfd 2.40-14 00:01:39.590 Host machine cpu family: x86_64 00:01:39.590 Host machine cpu: x86_64 00:01:39.590 Message: ## Building in Developer Mode ## 00:01:39.590 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.590 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.590 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.590 Program python3 found: YES (/usr/bin/python3) 00:01:39.590 Program cat found: YES (/usr/bin/cat) 00:01:39.590 Compiler for C supports arguments -march=native: YES 00:01:39.590 Checking for size of "void *" : 8 00:01:39.590 Checking for size of "void *" : 8 (cached) 00:01:39.590 Library m found: YES 00:01:39.590 Library numa found: YES 00:01:39.590 Has header "numaif.h" : YES 00:01:39.590 Library fdt found: NO 00:01:39.590 Library execinfo found: NO 00:01:39.590 Has header "execinfo.h" : YES 00:01:39.590 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:39.590 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.590 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.590 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.590 Run-time dependency openssl found: YES 3.1.1 00:01:39.590 Run-time dependency libpcap found: YES 1.10.4 00:01:39.590 Has header "pcap.h" with dependency libpcap: YES 00:01:39.590 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.590 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.590 Compiler for C supports arguments -Wformat: YES 00:01:39.590 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.590 Compiler for C supports arguments -Wformat-security: NO 00:01:39.590 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.590 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.590 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.590 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.590 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.590 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.590 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.590 Compiler for C supports arguments -Wundef: YES 00:01:39.590 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.590 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.590 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.590 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:39.590 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.590 Program objdump found: YES (/usr/bin/objdump) 00:01:39.590 Compiler for C supports arguments -mavx512f: YES 00:01:39.590 Checking if "AVX512 checking" compiles: YES 00:01:39.590 Fetching value of define "__SSE4_2__" : 1 00:01:39.590 Fetching value of define "__AES__" : 1 00:01:39.590 Fetching value of define "__AVX__" : 1 00:01:39.590 Fetching value of define "__AVX2__" : 1 00:01:39.590 Fetching value of define "__AVX512BW__" : 1 00:01:39.590 Fetching value of define "__AVX512CD__" : 1 00:01:39.590 Fetching value of define "__AVX512DQ__" : 1 00:01:39.590 Fetching value of define "__AVX512F__" : 1 00:01:39.590 Fetching value of define "__AVX512VL__" : 1 00:01:39.590 Fetching value of define "__PCLMUL__" : 1 00:01:39.590 Fetching value of define "__RDRND__" : 1 00:01:39.590 Fetching value of define "__RDSEED__" : 1 00:01:39.590 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:39.590 Fetching value of define "__znver1__" : (undefined) 00:01:39.590 Fetching value of define "__znver2__" : (undefined) 00:01:39.590 Fetching value of define "__znver3__" : (undefined) 00:01:39.590 Fetching value of define "__znver4__" : (undefined) 00:01:39.590 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.590 Message: lib/log: Defining dependency "log" 00:01:39.590 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.590 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.590 Checking for function "getentropy" : NO 00:01:39.590 Message: lib/eal: Defining dependency "eal" 00:01:39.590 Message: lib/ring: Defining dependency "ring" 00:01:39.590 Message: lib/rcu: Defining dependency "rcu" 00:01:39.590 Message: lib/mempool: Defining dependency "mempool" 00:01:39.590 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.590 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.590 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.590 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.590 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.590 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.590 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:39.590 Compiler for C supports arguments -mpclmul: YES 00:01:39.590 Compiler for C supports arguments -maes: YES 00:01:39.590 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.590 Compiler for C supports arguments -mavx512bw: YES 00:01:39.590 Compiler for C supports arguments -mavx512dq: YES 00:01:39.590 Compiler for C supports arguments -mavx512vl: YES 00:01:39.590 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.590 Compiler for C supports arguments -mavx2: YES 00:01:39.590 Compiler for C supports arguments -mavx: YES 00:01:39.590 Message: lib/net: Defining dependency "net" 00:01:39.590 Message: lib/meter: Defining dependency "meter" 00:01:39.590 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.590 Message: lib/pci: Defining dependency "pci" 00:01:39.591 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.591 Message: lib/hash: Defining dependency "hash" 00:01:39.591 Message: lib/timer: Defining dependency "timer" 00:01:39.591 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.591 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.591 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.591 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.591 
Message: lib/power: Defining dependency "power" 00:01:39.591 Message: lib/reorder: Defining dependency "reorder" 00:01:39.591 Message: lib/security: Defining dependency "security" 00:01:39.591 Has header "linux/userfaultfd.h" : YES 00:01:39.591 Has header "linux/vduse.h" : YES 00:01:39.591 Message: lib/vhost: Defining dependency "vhost" 00:01:39.591 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.591 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.591 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.591 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.591 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.591 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.591 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.591 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.591 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.591 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.591 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:39.591 Configuring doxy-api-html.conf using configuration 00:01:39.591 Configuring doxy-api-man.conf using configuration 00:01:39.591 Program mandb found: YES (/usr/bin/mandb) 00:01:39.591 Program sphinx-build found: NO 00:01:39.591 Configuring rte_build_config.h using configuration 00:01:39.591 Message: 00:01:39.591 ================= 00:01:39.591 Applications Enabled 00:01:39.591 ================= 00:01:39.591 00:01:39.591 apps: 00:01:39.591 00:01:39.591 00:01:39.591 Message: 00:01:39.591 ================= 00:01:39.591 Libraries Enabled 00:01:39.591 ================= 00:01:39.591 00:01:39.591 libs: 00:01:39.591 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.591 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.591 cryptodev, dmadev, power, reorder, security, vhost, 00:01:39.591 00:01:39.591 Message: 00:01:39.591 =============== 00:01:39.591 Drivers Enabled 00:01:39.591 =============== 00:01:39.591 00:01:39.591 common: 00:01:39.591 00:01:39.591 bus: 00:01:39.591 pci, vdev, 00:01:39.591 mempool: 00:01:39.591 ring, 00:01:39.591 dma: 00:01:39.591 00:01:39.591 net: 00:01:39.591 00:01:39.591 crypto: 00:01:39.591 00:01:39.591 compress: 00:01:39.591 00:01:39.591 vdpa: 00:01:39.591 00:01:39.591 00:01:39.591 Message: 00:01:39.591 ================= 00:01:39.591 Content Skipped 00:01:39.591 ================= 00:01:39.591 00:01:39.591 apps: 00:01:39.591 dumpcap: explicitly disabled via build config 00:01:39.591 graph: explicitly disabled via build config 00:01:39.591 pdump: explicitly disabled via build config 00:01:39.591 proc-info: explicitly disabled via build config 00:01:39.591 test-acl: explicitly disabled via build config 00:01:39.591 test-bbdev: explicitly disabled via build config 00:01:39.591 test-cmdline: explicitly disabled via build config 00:01:39.591 test-compress-perf: explicitly disabled via build config 00:01:39.591 test-crypto-perf: explicitly disabled via build config 00:01:39.591 test-dma-perf: explicitly disabled via build config 00:01:39.591 test-eventdev: explicitly disabled via build config 00:01:39.591 test-fib: explicitly disabled via build config 00:01:39.591 test-flow-perf: explicitly disabled via build config 00:01:39.591 test-gpudev: explicitly disabled via build config 00:01:39.591 test-mldev: explicitly disabled via build config 
00:01:39.591 test-pipeline: explicitly disabled via build config 00:01:39.591 test-pmd: explicitly disabled via build config 00:01:39.591 test-regex: explicitly disabled via build config 00:01:39.591 test-sad: explicitly disabled via build config 00:01:39.591 test-security-perf: explicitly disabled via build config 00:01:39.591 00:01:39.591 libs: 00:01:39.591 metrics: explicitly disabled via build config 00:01:39.591 acl: explicitly disabled via build config 00:01:39.591 bbdev: explicitly disabled via build config 00:01:39.591 bitratestats: explicitly disabled via build config 00:01:39.591 bpf: explicitly disabled via build config 00:01:39.591 cfgfile: explicitly disabled via build config 00:01:39.591 distributor: explicitly disabled via build config 00:01:39.591 efd: explicitly disabled via build config 00:01:39.591 eventdev: explicitly disabled via build config 00:01:39.591 dispatcher: explicitly disabled via build config 00:01:39.591 gpudev: explicitly disabled via build config 00:01:39.591 gro: explicitly disabled via build config 00:01:39.591 gso: explicitly disabled via build config 00:01:39.591 ip_frag: explicitly disabled via build config 00:01:39.591 jobstats: explicitly disabled via build config 00:01:39.591 latencystats: explicitly disabled via build config 00:01:39.591 lpm: explicitly disabled via build config 00:01:39.591 member: explicitly disabled via build config 00:01:39.591 pcapng: explicitly disabled via build config 00:01:39.591 rawdev: explicitly disabled via build config 00:01:39.591 regexdev: explicitly disabled via build config 00:01:39.591 mldev: explicitly disabled via build config 00:01:39.591 rib: explicitly disabled via build config 00:01:39.591 sched: explicitly disabled via build config 00:01:39.591 stack: explicitly disabled via build config 00:01:39.591 ipsec: explicitly disabled via build config 00:01:39.591 pdcp: explicitly disabled via build config 00:01:39.591 fib: explicitly disabled via build config 00:01:39.591 port: explicitly disabled via build config 00:01:39.591 pdump: explicitly disabled via build config 00:01:39.591 table: explicitly disabled via build config 00:01:39.591 pipeline: explicitly disabled via build config 00:01:39.591 graph: explicitly disabled via build config 00:01:39.591 node: explicitly disabled via build config 00:01:39.591 00:01:39.591 drivers: 00:01:39.591 common/cpt: not in enabled drivers build config 00:01:39.591 common/dpaax: not in enabled drivers build config 00:01:39.591 common/iavf: not in enabled drivers build config 00:01:39.591 common/idpf: not in enabled drivers build config 00:01:39.591 common/mvep: not in enabled drivers build config 00:01:39.591 common/octeontx: not in enabled drivers build config 00:01:39.591 bus/auxiliary: not in enabled drivers build config 00:01:39.591 bus/cdx: not in enabled drivers build config 00:01:39.591 bus/dpaa: not in enabled drivers build config 00:01:39.591 bus/fslmc: not in enabled drivers build config 00:01:39.591 bus/ifpga: not in enabled drivers build config 00:01:39.591 bus/platform: not in enabled drivers build config 00:01:39.591 bus/vmbus: not in enabled drivers build config 00:01:39.591 common/cnxk: not in enabled drivers build config 00:01:39.591 common/mlx5: not in enabled drivers build config 00:01:39.591 common/nfp: not in enabled drivers build config 00:01:39.591 common/qat: not in enabled drivers build config 00:01:39.591 common/sfc_efx: not in enabled drivers build config 00:01:39.591 mempool/bucket: not in enabled drivers build config 00:01:39.591 mempool/cnxk: 
not in enabled drivers build config 00:01:39.591 mempool/dpaa: not in enabled drivers build config 00:01:39.591 mempool/dpaa2: not in enabled drivers build config 00:01:39.591 mempool/octeontx: not in enabled drivers build config 00:01:39.591 mempool/stack: not in enabled drivers build config 00:01:39.591 dma/cnxk: not in enabled drivers build config 00:01:39.591 dma/dpaa: not in enabled drivers build config 00:01:39.591 dma/dpaa2: not in enabled drivers build config 00:01:39.591 dma/hisilicon: not in enabled drivers build config 00:01:39.591 dma/idxd: not in enabled drivers build config 00:01:39.591 dma/ioat: not in enabled drivers build config 00:01:39.591 dma/skeleton: not in enabled drivers build config 00:01:39.591 net/af_packet: not in enabled drivers build config 00:01:39.591 net/af_xdp: not in enabled drivers build config 00:01:39.591 net/ark: not in enabled drivers build config 00:01:39.591 net/atlantic: not in enabled drivers build config 00:01:39.591 net/avp: not in enabled drivers build config 00:01:39.591 net/axgbe: not in enabled drivers build config 00:01:39.591 net/bnx2x: not in enabled drivers build config 00:01:39.591 net/bnxt: not in enabled drivers build config 00:01:39.591 net/bonding: not in enabled drivers build config 00:01:39.591 net/cnxk: not in enabled drivers build config 00:01:39.591 net/cpfl: not in enabled drivers build config 00:01:39.591 net/cxgbe: not in enabled drivers build config 00:01:39.591 net/dpaa: not in enabled drivers build config 00:01:39.591 net/dpaa2: not in enabled drivers build config 00:01:39.591 net/e1000: not in enabled drivers build config 00:01:39.591 net/ena: not in enabled drivers build config 00:01:39.591 net/enetc: not in enabled drivers build config 00:01:39.591 net/enetfec: not in enabled drivers build config 00:01:39.591 net/enic: not in enabled drivers build config 00:01:39.591 net/failsafe: not in enabled drivers build config 00:01:39.591 net/fm10k: not in enabled drivers build config 00:01:39.591 net/gve: not in enabled drivers build config 00:01:39.591 net/hinic: not in enabled drivers build config 00:01:39.591 net/hns3: not in enabled drivers build config 00:01:39.591 net/i40e: not in enabled drivers build config 00:01:39.591 net/iavf: not in enabled drivers build config 00:01:39.591 net/ice: not in enabled drivers build config 00:01:39.591 net/idpf: not in enabled drivers build config 00:01:39.591 net/igc: not in enabled drivers build config 00:01:39.591 net/ionic: not in enabled drivers build config 00:01:39.591 net/ipn3ke: not in enabled drivers build config 00:01:39.591 net/ixgbe: not in enabled drivers build config 00:01:39.591 net/mana: not in enabled drivers build config 00:01:39.591 net/memif: not in enabled drivers build config 00:01:39.591 net/mlx4: not in enabled drivers build config 00:01:39.591 net/mlx5: not in enabled drivers build config 00:01:39.591 net/mvneta: not in enabled drivers build config 00:01:39.591 net/mvpp2: not in enabled drivers build config 00:01:39.591 net/netvsc: not in enabled drivers build config 00:01:39.591 net/nfb: not in enabled drivers build config 00:01:39.591 net/nfp: not in enabled drivers build config 00:01:39.591 net/ngbe: not in enabled drivers build config 00:01:39.591 net/null: not in enabled drivers build config 00:01:39.591 net/octeontx: not in enabled drivers build config 00:01:39.591 net/octeon_ep: not in enabled drivers build config 00:01:39.591 net/pcap: not in enabled drivers build config 00:01:39.591 net/pfe: not in enabled drivers build config 00:01:39.591 net/qede: 
not in enabled drivers build config 00:01:39.591 net/ring: not in enabled drivers build config 00:01:39.592 net/sfc: not in enabled drivers build config 00:01:39.592 net/softnic: not in enabled drivers build config 00:01:39.592 net/tap: not in enabled drivers build config 00:01:39.592 net/thunderx: not in enabled drivers build config 00:01:39.592 net/txgbe: not in enabled drivers build config 00:01:39.592 net/vdev_netvsc: not in enabled drivers build config 00:01:39.592 net/vhost: not in enabled drivers build config 00:01:39.592 net/virtio: not in enabled drivers build config 00:01:39.592 net/vmxnet3: not in enabled drivers build config 00:01:39.592 raw/*: missing internal dependency, "rawdev" 00:01:39.592 crypto/armv8: not in enabled drivers build config 00:01:39.592 crypto/bcmfs: not in enabled drivers build config 00:01:39.592 crypto/caam_jr: not in enabled drivers build config 00:01:39.592 crypto/ccp: not in enabled drivers build config 00:01:39.592 crypto/cnxk: not in enabled drivers build config 00:01:39.592 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.592 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.592 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.592 crypto/mlx5: not in enabled drivers build config 00:01:39.592 crypto/mvsam: not in enabled drivers build config 00:01:39.592 crypto/nitrox: not in enabled drivers build config 00:01:39.592 crypto/null: not in enabled drivers build config 00:01:39.592 crypto/octeontx: not in enabled drivers build config 00:01:39.592 crypto/openssl: not in enabled drivers build config 00:01:39.592 crypto/scheduler: not in enabled drivers build config 00:01:39.592 crypto/uadk: not in enabled drivers build config 00:01:39.592 crypto/virtio: not in enabled drivers build config 00:01:39.592 compress/isal: not in enabled drivers build config 00:01:39.592 compress/mlx5: not in enabled drivers build config 00:01:39.592 compress/octeontx: not in enabled drivers build config 00:01:39.592 compress/zlib: not in enabled drivers build config 00:01:39.592 regex/*: missing internal dependency, "regexdev" 00:01:39.592 ml/*: missing internal dependency, "mldev" 00:01:39.592 vdpa/ifc: not in enabled drivers build config 00:01:39.592 vdpa/mlx5: not in enabled drivers build config 00:01:39.592 vdpa/nfp: not in enabled drivers build config 00:01:39.592 vdpa/sfc: not in enabled drivers build config 00:01:39.592 event/*: missing internal dependency, "eventdev" 00:01:39.592 baseband/*: missing internal dependency, "bbdev" 00:01:39.592 gpu/*: missing internal dependency, "gpudev" 00:01:39.592 00:01:39.592 00:01:39.592 Build targets in project: 84 00:01:39.592 00:01:39.592 DPDK 23.11.0 00:01:39.592 00:01:39.592 User defined options 00:01:39.592 buildtype : debug 00:01:39.592 default_library : shared 00:01:39.592 libdir : lib 00:01:39.592 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.592 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.592 c_link_args : 00:01:39.592 cpu_instruction_set: native 00:01:39.592 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:39.592 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:39.592 enable_docs : false 00:01:39.592 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.592 enable_kmods : false 00:01:39.592 max_lcores : 128 00:01:39.592 tests : false 00:01:39.592 00:01:39.592 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.592 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.592 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.592 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.592 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.592 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.592 [5/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.592 [6/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.592 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.592 [8/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.592 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.592 [10/264] Linking static target lib/librte_kvargs.a 00:01:39.592 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.592 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.592 [13/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.592 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.592 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.592 [16/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.592 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.592 [18/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.592 [19/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.592 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.592 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.592 [22/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.592 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.592 [24/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.592 [25/264] Linking static target lib/librte_log.a 00:01:39.592 [26/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.592 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.592 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.592 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.592 [30/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.852 [31/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.852 [32/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.852 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.852 [34/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.852 [35/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.852 [36/264] Linking static target lib/librte_pci.a 00:01:39.852 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.852 [38/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.852 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.852 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.852 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.852 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.852 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.852 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.852 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.852 [46/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.112 [47/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.112 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.112 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.112 [50/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.112 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.112 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.112 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.112 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.112 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.112 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.112 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.112 [58/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.112 [59/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.112 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.112 [61/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.112 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.112 [63/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.112 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.112 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.112 [66/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.112 [67/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.112 [68/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.112 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.112 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.112 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.112 [72/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.112 [73/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.112 [74/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.112 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.112 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.112 [77/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.112 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.112 [79/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.112 [80/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.112 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.112 [82/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.112 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.112 [84/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.112 [85/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.112 [86/264] Linking static target lib/librte_meter.a 00:01:40.112 [87/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.112 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.112 [89/264] Linking static target lib/librte_telemetry.a 00:01:40.112 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.112 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.112 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.112 [93/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.112 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.112 [95/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.112 [96/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.112 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.112 [98/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.112 [99/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.112 [100/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.112 [101/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.113 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.113 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.113 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.113 [105/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.113 [106/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.113 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.113 [108/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.113 [109/264] Linking static target lib/librte_cmdline.a 00:01:40.113 [110/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.113 [111/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.113 [112/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:40.113 [113/264] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.113 [114/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.113 [115/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.113 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.113 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.113 [118/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.113 [119/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.113 [120/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.113 [121/264] Linking static target lib/librte_ring.a 00:01:40.113 [122/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.113 [123/264] Linking static target lib/librte_timer.a 00:01:40.113 [124/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.113 [125/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.113 [126/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.113 [127/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.113 [128/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.113 [129/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.113 [130/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.113 [131/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.113 [132/264] Linking static target lib/librte_security.a 00:01:40.113 [133/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.113 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.113 [135/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.113 [136/264] Linking static target lib/librte_dmadev.a 00:01:40.113 [137/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.113 [138/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.113 [139/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.113 [140/264] Linking static target lib/librte_rcu.a 00:01:40.113 [141/264] Linking target lib/librte_log.so.24.0 00:01:40.113 [142/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.113 [143/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.113 [144/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.113 [145/264] Linking static target lib/librte_compressdev.a 00:01:40.113 [146/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.113 [147/264] Linking static target lib/librte_net.a 00:01:40.113 [148/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.374 [149/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.374 [150/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.374 [151/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.374 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.374 [153/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.374 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.374 [155/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
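The configuration summary at the top of this stretch (enable_docs, enable_drivers, enable_kmods, max_lcores, tests) and the [n/264] compile/link steps are ordinary meson + ninja output for the bundled DPDK tree. As a rough sketch only: the option names below are real DPDK meson options taken from that summary, but the actual command line here is assembled by SPDK's configure wrapper, so treat the paths and invocation as illustrative, not the literal commands the job ran.

  # hypothetical manual equivalent of the configuration shown above
  meson setup build-tmp \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
  # produces the "[n/264] Compiling C object ..." steps seen in this log
  ninja -C build-tmp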
00:01:40.374 [156/264] Linking static target lib/librte_mempool.a 00:01:40.374 [157/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.374 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.375 [159/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.375 [160/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.375 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.375 [162/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.375 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.375 [164/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.375 [165/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.375 [166/264] Linking static target lib/librte_power.a 00:01:40.375 [167/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.375 [168/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.375 [169/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:40.375 [170/264] Linking static target lib/librte_reorder.a 00:01:40.375 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.375 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.375 [173/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.375 [174/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.375 [175/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.375 [176/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.375 [177/264] Linking target lib/librte_kvargs.so.24.0 00:01:40.375 [178/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.375 [179/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.375 [180/264] Linking static target lib/librte_eal.a 00:01:40.375 [181/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.375 [182/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.375 [183/264] Linking static target drivers/librte_bus_vdev.a 00:01:40.375 [184/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.375 [185/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.375 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.375 [187/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.375 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.375 [189/264] Linking static target lib/librte_mbuf.a 00:01:40.375 [190/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.635 [191/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.635 [192/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:40.635 [193/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [194/264] Linking static target drivers/librte_bus_pci.a 00:01:40.635 [195/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.635 [196/264] Linking static target 
lib/librte_hash.a 00:01:40.635 [197/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.635 [198/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [199/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [200/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.635 [201/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.635 [202/264] Linking static target drivers/librte_mempool_ring.a 00:01:40.635 [203/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.635 [204/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.635 [205/264] Linking static target lib/librte_cryptodev.a 00:01:40.635 [206/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [208/264] Linking target lib/librte_telemetry.so.24.0 00:01:40.894 [209/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [210/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [211/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:40.894 [214/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:40.894 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.154 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.154 [217/264] Linking static target lib/librte_ethdev.a 00:01:41.154 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [221/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.984 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.984 [225/264] Linking static target lib/librte_vhost.a 00:01:42.924 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.867 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.455 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.842 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.102 [230/264] Linking target lib/librte_eal.so.24.0 00:01:52.102 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:52.102 
[232/264] Linking target lib/librte_pci.so.24.0 00:01:52.102 [233/264] Linking target lib/librte_ring.so.24.0 00:01:52.102 [234/264] Linking target lib/librte_meter.so.24.0 00:01:52.102 [235/264] Linking target lib/librte_dmadev.so.24.0 00:01:52.102 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:52.102 [237/264] Linking target lib/librte_timer.so.24.0 00:01:52.363 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:52.363 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:52.363 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:52.363 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:52.363 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:52.363 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:52.363 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:52.363 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:52.363 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:52.623 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:52.623 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:52.623 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:52.623 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:52.623 [251/264] Linking target lib/librte_net.so.24.0 00:01:52.623 [252/264] Linking target lib/librte_reorder.so.24.0 00:01:52.623 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:52.623 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:52.885 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:52.885 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:52.885 [257/264] Linking target lib/librte_hash.so.24.0 00:01:52.885 [258/264] Linking target lib/librte_cmdline.so.24.0 00:01:52.885 [259/264] Linking target lib/librte_security.so.24.0 00:01:52.885 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:53.146 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:53.146 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:53.146 [263/264] Linking target lib/librte_power.so.24.0 00:01:53.146 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:53.146 INFO: autodetecting backend as ninja 00:01:53.146 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:57.353 CC lib/ut_mock/mock.o 00:01:57.353 CC lib/ut/ut.o 00:01:57.353 CC lib/log/log.o 00:01:57.353 CC lib/log/log_flags.o 00:01:57.353 CC lib/log/log_deprecated.o 00:01:57.353 LIB libspdk_ut_mock.a 00:01:57.353 LIB libspdk_ut.a 00:01:57.353 LIB libspdk_log.a 00:01:57.353 SO libspdk_ut_mock.so.6.0 00:01:57.353 SO libspdk_ut.so.2.0 00:01:57.353 SO libspdk_log.so.7.1 00:01:57.353 SYMLINK libspdk_ut_mock.so 00:01:57.353 SYMLINK libspdk_ut.so 00:01:57.353 SYMLINK libspdk_log.so 00:01:57.613 CC lib/dma/dma.o 00:01:57.613 CXX lib/trace_parser/trace.o 00:01:57.613 CC lib/ioat/ioat.o 00:01:57.613 CC lib/util/base64.o 00:01:57.613 CC lib/util/bit_array.o 00:01:57.613 CC lib/util/cpuset.o 00:01:57.613 CC lib/util/crc16.o 00:01:57.613 CC lib/util/crc32.o 00:01:57.613 CC 
lib/util/crc32c.o 00:01:57.613 CC lib/util/crc32_ieee.o 00:01:57.613 CC lib/util/crc64.o 00:01:57.613 CC lib/util/dif.o 00:01:57.613 CC lib/util/fd.o 00:01:57.613 CC lib/util/fd_group.o 00:01:57.613 CC lib/util/file.o 00:01:57.613 CC lib/util/hexlify.o 00:01:57.613 CC lib/util/iov.o 00:01:57.613 CC lib/util/math.o 00:01:57.613 CC lib/util/net.o 00:01:57.613 CC lib/util/pipe.o 00:01:57.613 CC lib/util/strerror_tls.o 00:01:57.613 CC lib/util/string.o 00:01:57.613 CC lib/util/uuid.o 00:01:57.613 CC lib/util/xor.o 00:01:57.613 CC lib/util/zipf.o 00:01:57.613 CC lib/util/md5.o 00:01:57.873 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.873 CC lib/vfio_user/host/vfio_user.o 00:01:57.873 LIB libspdk_dma.a 00:01:57.873 SO libspdk_dma.so.5.0 00:01:57.873 LIB libspdk_ioat.a 00:01:57.873 SYMLINK libspdk_dma.so 00:01:57.873 SO libspdk_ioat.so.7.0 00:01:57.873 SYMLINK libspdk_ioat.so 00:01:58.135 LIB libspdk_vfio_user.a 00:01:58.135 SO libspdk_vfio_user.so.5.0 00:01:58.135 LIB libspdk_util.a 00:01:58.135 SYMLINK libspdk_vfio_user.so 00:01:58.135 SO libspdk_util.so.10.0 00:01:58.135 LIB libspdk_trace_parser.a 00:01:58.135 SO libspdk_trace_parser.so.6.0 00:01:58.396 SYMLINK libspdk_util.so 00:01:58.396 SYMLINK libspdk_trace_parser.so 00:01:58.658 CC lib/json/json_parse.o 00:01:58.658 CC lib/json/json_util.o 00:01:58.658 CC lib/json/json_write.o 00:01:58.658 CC lib/conf/conf.o 00:01:58.658 CC lib/rdma_utils/rdma_utils.o 00:01:58.658 CC lib/idxd/idxd.o 00:01:58.658 CC lib/vmd/vmd.o 00:01:58.658 CC lib/env_dpdk/env.o 00:01:58.658 CC lib/idxd/idxd_user.o 00:01:58.658 CC lib/vmd/led.o 00:01:58.658 CC lib/rdma_provider/common.o 00:01:58.658 CC lib/env_dpdk/memory.o 00:01:58.658 CC lib/idxd/idxd_kernel.o 00:01:58.658 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:58.658 CC lib/env_dpdk/pci.o 00:01:58.658 CC lib/env_dpdk/init.o 00:01:58.658 CC lib/env_dpdk/threads.o 00:01:58.658 CC lib/env_dpdk/pci_ioat.o 00:01:58.658 CC lib/env_dpdk/pci_virtio.o 00:01:58.658 CC lib/env_dpdk/pci_vmd.o 00:01:58.658 CC lib/env_dpdk/pci_idxd.o 00:01:58.658 CC lib/env_dpdk/pci_event.o 00:01:58.658 CC lib/env_dpdk/sigbus_handler.o 00:01:58.658 CC lib/env_dpdk/pci_dpdk.o 00:01:58.658 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:58.658 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:58.919 LIB libspdk_rdma_provider.a 00:01:58.919 LIB libspdk_conf.a 00:01:58.919 SO libspdk_rdma_provider.so.6.0 00:01:58.919 LIB libspdk_rdma_utils.a 00:01:58.919 SO libspdk_conf.so.6.0 00:01:58.919 LIB libspdk_json.a 00:01:58.919 SO libspdk_rdma_utils.so.1.0 00:01:58.920 SO libspdk_json.so.6.0 00:01:58.920 SYMLINK libspdk_rdma_provider.so 00:01:58.920 SYMLINK libspdk_conf.so 00:01:59.181 SYMLINK libspdk_rdma_utils.so 00:01:59.181 SYMLINK libspdk_json.so 00:01:59.181 LIB libspdk_idxd.a 00:01:59.181 SO libspdk_idxd.so.12.1 00:01:59.181 LIB libspdk_vmd.a 00:01:59.441 SO libspdk_vmd.so.6.0 00:01:59.441 SYMLINK libspdk_idxd.so 00:01:59.441 SYMLINK libspdk_vmd.so 00:01:59.441 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.441 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.441 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:59.441 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.702 LIB libspdk_jsonrpc.a 00:01:59.702 SO libspdk_jsonrpc.so.6.0 00:01:59.702 SYMLINK libspdk_jsonrpc.so 00:01:59.963 LIB libspdk_env_dpdk.a 00:01:59.963 SO libspdk_env_dpdk.so.15.0 00:02:00.223 SYMLINK libspdk_env_dpdk.so 00:02:00.223 CC lib/rpc/rpc.o 00:02:00.485 LIB libspdk_rpc.a 00:02:00.485 SO libspdk_rpc.so.6.0 00:02:00.485 SYMLINK libspdk_rpc.so 00:02:00.746 CC lib/trace/trace.o 00:02:00.746 CC lib/trace/trace_flags.o 
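lib/jsonrpc and lib/rpc, compiled just above, form the management plane that the functional tests exercise later in the run. A minimal sketch of how that surface is normally driven once a target is up, assuming the stock scripts/rpc.py helper and SPDK's default /var/tmp/spdk.sock listen socket:

  # list every JSON-RPC method the running target exposes
  ./scripts/rpc.py rpc_get_methods
  # same query, with the socket path given explicitly
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods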
00:02:00.746 CC lib/notify/notify.o 00:02:00.746 CC lib/trace/trace_rpc.o 00:02:00.746 CC lib/notify/notify_rpc.o 00:02:00.746 CC lib/keyring/keyring.o 00:02:00.746 CC lib/keyring/keyring_rpc.o 00:02:01.007 LIB libspdk_notify.a 00:02:01.007 SO libspdk_notify.so.6.0 00:02:01.007 LIB libspdk_keyring.a 00:02:01.007 LIB libspdk_trace.a 00:02:01.268 SO libspdk_keyring.so.2.0 00:02:01.268 SYMLINK libspdk_notify.so 00:02:01.268 SO libspdk_trace.so.11.0 00:02:01.268 SYMLINK libspdk_keyring.so 00:02:01.268 SYMLINK libspdk_trace.so 00:02:01.529 CC lib/sock/sock.o 00:02:01.529 CC lib/sock/sock_rpc.o 00:02:01.529 CC lib/thread/thread.o 00:02:01.529 CC lib/thread/iobuf.o 00:02:02.102 LIB libspdk_sock.a 00:02:02.102 SO libspdk_sock.so.10.0 00:02:02.102 SYMLINK libspdk_sock.so 00:02:02.362 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:02.362 CC lib/nvme/nvme_ctrlr.o 00:02:02.362 CC lib/nvme/nvme_fabric.o 00:02:02.362 CC lib/nvme/nvme_ns_cmd.o 00:02:02.362 CC lib/nvme/nvme_ns.o 00:02:02.362 CC lib/nvme/nvme_pcie_common.o 00:02:02.362 CC lib/nvme/nvme_pcie.o 00:02:02.362 CC lib/nvme/nvme_qpair.o 00:02:02.362 CC lib/nvme/nvme.o 00:02:02.362 CC lib/nvme/nvme_quirks.o 00:02:02.362 CC lib/nvme/nvme_transport.o 00:02:02.362 CC lib/nvme/nvme_discovery.o 00:02:02.362 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:02.362 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.362 CC lib/nvme/nvme_tcp.o 00:02:02.362 CC lib/nvme/nvme_opal.o 00:02:02.362 CC lib/nvme/nvme_io_msg.o 00:02:02.362 CC lib/nvme/nvme_poll_group.o 00:02:02.362 CC lib/nvme/nvme_zns.o 00:02:02.362 CC lib/nvme/nvme_stubs.o 00:02:02.362 CC lib/nvme/nvme_auth.o 00:02:02.362 CC lib/nvme/nvme_cuse.o 00:02:02.362 CC lib/nvme/nvme_vfio_user.o 00:02:02.362 CC lib/nvme/nvme_rdma.o 00:02:02.934 LIB libspdk_thread.a 00:02:02.934 SO libspdk_thread.so.10.2 00:02:02.934 SYMLINK libspdk_thread.so 00:02:03.505 CC lib/accel/accel.o 00:02:03.505 CC lib/accel/accel_sw.o 00:02:03.505 CC lib/accel/accel_rpc.o 00:02:03.505 CC lib/fsdev/fsdev.o 00:02:03.505 CC lib/fsdev/fsdev_io.o 00:02:03.505 CC lib/fsdev/fsdev_rpc.o 00:02:03.505 CC lib/blob/blobstore.o 00:02:03.505 CC lib/blob/request.o 00:02:03.505 CC lib/blob/zeroes.o 00:02:03.505 CC lib/blob/blob_bs_dev.o 00:02:03.505 CC lib/vfu_tgt/tgt_endpoint.o 00:02:03.505 CC lib/vfu_tgt/tgt_rpc.o 00:02:03.505 CC lib/virtio/virtio.o 00:02:03.505 CC lib/virtio/virtio_vhost_user.o 00:02:03.505 CC lib/init/json_config.o 00:02:03.505 CC lib/virtio/virtio_vfio_user.o 00:02:03.505 CC lib/init/subsystem.o 00:02:03.505 CC lib/virtio/virtio_pci.o 00:02:03.505 CC lib/init/subsystem_rpc.o 00:02:03.505 CC lib/init/rpc.o 00:02:03.767 LIB libspdk_init.a 00:02:03.767 SO libspdk_init.so.6.0 00:02:03.767 LIB libspdk_virtio.a 00:02:03.767 LIB libspdk_vfu_tgt.a 00:02:03.767 SO libspdk_virtio.so.7.0 00:02:03.767 SO libspdk_vfu_tgt.so.3.0 00:02:03.767 SYMLINK libspdk_init.so 00:02:03.767 SYMLINK libspdk_virtio.so 00:02:03.767 SYMLINK libspdk_vfu_tgt.so 00:02:04.027 LIB libspdk_fsdev.a 00:02:04.027 SO libspdk_fsdev.so.1.0 00:02:04.027 SYMLINK libspdk_fsdev.so 00:02:04.027 CC lib/event/app.o 00:02:04.027 CC lib/event/reactor.o 00:02:04.027 CC lib/event/log_rpc.o 00:02:04.027 CC lib/event/app_rpc.o 00:02:04.287 CC lib/event/scheduler_static.o 00:02:04.287 LIB libspdk_accel.a 00:02:04.287 SO libspdk_accel.so.16.0 00:02:04.287 LIB libspdk_nvme.a 00:02:04.548 SYMLINK libspdk_accel.so 00:02:04.548 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:04.548 SO libspdk_nvme.so.14.0 00:02:04.548 LIB libspdk_event.a 00:02:04.548 SO libspdk_event.so.14.0 00:02:04.809 SYMLINK libspdk_event.so 
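Each LIB / SO / SYMLINK triplet above (for example libspdk_accel.a, libspdk_accel.so.16.0, and the libspdk_accel.so symlink) is the conventional static-archive-plus-versioned-shared-object pattern. A sketch of what the SO and SYMLINK steps amount to, not the literal rules from SPDK's makefiles; object names are taken from the CC lines above:

  # versioned shared object with a matching SONAME baked in
  cc -shared -Wl,-soname,libspdk_accel.so.16.0 \
      -o libspdk_accel.so.16.0 accel.o accel_rpc.o accel_sw.o
  # unversioned symlink so that -lspdk_accel resolves at link time
  ln -sf libspdk_accel.so.16.0 libspdk_accel.so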
00:02:04.809 SYMLINK libspdk_nvme.so 00:02:04.809 CC lib/bdev/bdev.o 00:02:04.809 CC lib/bdev/bdev_rpc.o 00:02:04.809 CC lib/bdev/bdev_zone.o 00:02:04.809 CC lib/bdev/part.o 00:02:04.809 CC lib/bdev/scsi_nvme.o 00:02:05.070 LIB libspdk_fuse_dispatcher.a 00:02:05.070 SO libspdk_fuse_dispatcher.so.1.0 00:02:05.330 SYMLINK libspdk_fuse_dispatcher.so 00:02:05.902 LIB libspdk_blob.a 00:02:06.163 SO libspdk_blob.so.11.0 00:02:06.163 SYMLINK libspdk_blob.so 00:02:06.423 CC lib/blobfs/blobfs.o 00:02:06.423 CC lib/blobfs/tree.o 00:02:06.423 CC lib/lvol/lvol.o 00:02:07.472 LIB libspdk_bdev.a 00:02:07.472 SO libspdk_bdev.so.17.0 00:02:07.472 LIB libspdk_blobfs.a 00:02:07.472 SO libspdk_blobfs.so.10.0 00:02:07.472 SYMLINK libspdk_bdev.so 00:02:07.472 LIB libspdk_lvol.a 00:02:07.472 SYMLINK libspdk_blobfs.so 00:02:07.472 SO libspdk_lvol.so.10.0 00:02:07.472 SYMLINK libspdk_lvol.so 00:02:07.734 CC lib/scsi/dev.o 00:02:07.734 CC lib/scsi/lun.o 00:02:07.734 CC lib/nbd/nbd.o 00:02:07.734 CC lib/scsi/port.o 00:02:07.734 CC lib/nbd/nbd_rpc.o 00:02:07.734 CC lib/scsi/scsi.o 00:02:07.734 CC lib/scsi/scsi_bdev.o 00:02:07.734 CC lib/scsi/scsi_pr.o 00:02:07.734 CC lib/ublk/ublk.o 00:02:07.734 CC lib/nvmf/ctrlr.o 00:02:07.734 CC lib/scsi/scsi_rpc.o 00:02:07.734 CC lib/nvmf/ctrlr_discovery.o 00:02:07.734 CC lib/ublk/ublk_rpc.o 00:02:07.734 CC lib/scsi/task.o 00:02:07.734 CC lib/nvmf/ctrlr_bdev.o 00:02:07.734 CC lib/ftl/ftl_core.o 00:02:07.734 CC lib/nvmf/subsystem.o 00:02:07.734 CC lib/ftl/ftl_init.o 00:02:07.734 CC lib/nvmf/nvmf.o 00:02:07.734 CC lib/ftl/ftl_layout.o 00:02:07.734 CC lib/nvmf/nvmf_rpc.o 00:02:07.734 CC lib/ftl/ftl_debug.o 00:02:07.734 CC lib/nvmf/transport.o 00:02:07.734 CC lib/ftl/ftl_io.o 00:02:07.734 CC lib/nvmf/tcp.o 00:02:07.734 CC lib/ftl/ftl_sb.o 00:02:07.734 CC lib/nvmf/stubs.o 00:02:07.734 CC lib/ftl/ftl_l2p.o 00:02:07.734 CC lib/nvmf/vfio_user.o 00:02:07.734 CC lib/nvmf/mdns_server.o 00:02:07.734 CC lib/ftl/ftl_l2p_flat.o 00:02:07.734 CC lib/ftl/ftl_nv_cache.o 00:02:07.734 CC lib/nvmf/rdma.o 00:02:07.734 CC lib/ftl/ftl_band.o 00:02:07.734 CC lib/nvmf/auth.o 00:02:07.734 CC lib/ftl/ftl_band_ops.o 00:02:07.734 CC lib/ftl/ftl_writer.o 00:02:07.734 CC lib/ftl/ftl_rq.o 00:02:07.734 CC lib/ftl/ftl_reloc.o 00:02:07.734 CC lib/ftl/ftl_l2p_cache.o 00:02:07.734 CC lib/ftl/ftl_p2l.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt.o 00:02:07.734 CC lib/ftl/ftl_p2l_log.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:07.734 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:07.734 CC lib/ftl/utils/ftl_conf.o 00:02:07.734 CC lib/ftl/utils/ftl_mempool.o 00:02:07.734 CC lib/ftl/utils/ftl_md.o 00:02:07.734 CC lib/ftl/utils/ftl_bitmap.o 00:02:07.734 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:07.734 CC lib/ftl/utils/ftl_property.o 00:02:07.734 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:07.734 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:07.734 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:07.734 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:07.734 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:07.734 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:07.735 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:07.735 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:07.735 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:07.735 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:07.735 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:07.735 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:07.735 CC lib/ftl/base/ftl_base_dev.o 00:02:07.735 CC lib/ftl/base/ftl_base_bdev.o 00:02:07.735 CC lib/ftl/ftl_trace.o 00:02:08.305 LIB libspdk_nbd.a 00:02:08.305 SO libspdk_nbd.so.7.0 00:02:08.305 LIB libspdk_scsi.a 00:02:08.305 SYMLINK libspdk_nbd.so 00:02:08.305 SO libspdk_scsi.so.9.0 00:02:08.305 SYMLINK libspdk_scsi.so 00:02:08.566 LIB libspdk_ublk.a 00:02:08.566 SO libspdk_ublk.so.3.0 00:02:08.566 SYMLINK libspdk_ublk.so 00:02:08.826 LIB libspdk_ftl.a 00:02:08.826 CC lib/vhost/vhost.o 00:02:08.826 CC lib/vhost/vhost_scsi.o 00:02:08.826 CC lib/vhost/vhost_rpc.o 00:02:08.826 CC lib/vhost/vhost_blk.o 00:02:08.826 CC lib/vhost/rte_vhost_user.o 00:02:08.826 CC lib/iscsi/conn.o 00:02:08.826 CC lib/iscsi/init_grp.o 00:02:08.826 CC lib/iscsi/iscsi.o 00:02:08.826 CC lib/iscsi/param.o 00:02:08.827 CC lib/iscsi/portal_grp.o 00:02:08.827 CC lib/iscsi/tgt_node.o 00:02:08.827 CC lib/iscsi/iscsi_subsystem.o 00:02:08.827 CC lib/iscsi/iscsi_rpc.o 00:02:08.827 CC lib/iscsi/task.o 00:02:08.827 SO libspdk_ftl.so.9.0 00:02:09.087 SYMLINK libspdk_ftl.so 00:02:09.659 LIB libspdk_nvmf.a 00:02:09.659 SO libspdk_nvmf.so.19.1 00:02:09.659 LIB libspdk_vhost.a 00:02:09.920 SO libspdk_vhost.so.8.0 00:02:09.920 SYMLINK libspdk_nvmf.so 00:02:09.920 SYMLINK libspdk_vhost.so 00:02:09.920 LIB libspdk_iscsi.a 00:02:10.181 SO libspdk_iscsi.so.8.0 00:02:10.181 SYMLINK libspdk_iscsi.so 00:02:10.753 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.753 CC module/vfu_device/vfu_virtio.o 00:02:10.753 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.753 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.753 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.753 CC module/vfu_device/vfu_virtio_fs.o 00:02:11.013 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.013 LIB libspdk_env_dpdk_rpc.a 00:02:11.013 CC module/accel/ioat/accel_ioat.o 00:02:11.013 CC module/fsdev/aio/fsdev_aio.o 00:02:11.013 CC module/sock/posix/posix.o 00:02:11.013 CC module/blob/bdev/blob_bdev.o 00:02:11.013 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.013 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.013 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:11.013 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.013 CC module/fsdev/aio/linux_aio_mgr.o 00:02:11.013 CC module/accel/error/accel_error.o 00:02:11.013 CC module/accel/error/accel_error_rpc.o 00:02:11.013 CC module/accel/iaa/accel_iaa.o 00:02:11.013 CC module/accel/dsa/accel_dsa.o 00:02:11.013 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.013 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.013 CC module/keyring/linux/keyring_rpc.o 00:02:11.013 CC module/keyring/linux/keyring.o 00:02:11.013 CC module/keyring/file/keyring.o 00:02:11.013 CC module/keyring/file/keyring_rpc.o 00:02:11.013 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.013 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.274 LIB libspdk_keyring_linux.a 00:02:11.274 LIB libspdk_keyring_file.a 00:02:11.274 LIB libspdk_scheduler_dynamic.a 00:02:11.274 SO libspdk_keyring_linux.so.1.0 00:02:11.274 LIB libspdk_scheduler_gscheduler.a 00:02:11.274 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.274 LIB libspdk_accel_error.a 00:02:11.274 LIB libspdk_accel_ioat.a 00:02:11.274 LIB libspdk_accel_iaa.a 00:02:11.274 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.274 SO libspdk_keyring_file.so.2.0 00:02:11.274 
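The module/scheduler and module/keyring objects being compiled here are runtime-pluggable: which scheduler an SPDK application actually uses is chosen over JSON-RPC after startup, not at build time. A hedged example; framework_set_scheduler is a real RPC method, but the invocation context (running app, default socket) is assumed:

  # switch a running app from the default static scheduler to dynamic
  ./scripts/rpc.py framework_set_scheduler dynamic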
SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.274 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.274 SO libspdk_accel_error.so.2.0 00:02:11.274 SO libspdk_accel_ioat.so.6.0 00:02:11.274 SO libspdk_accel_iaa.so.3.0 00:02:11.274 SYMLINK libspdk_keyring_linux.so 00:02:11.274 LIB libspdk_blob_bdev.a 00:02:11.274 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.274 SYMLINK libspdk_keyring_file.so 00:02:11.274 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.274 LIB libspdk_accel_dsa.a 00:02:11.274 SO libspdk_blob_bdev.so.11.0 00:02:11.274 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.274 SYMLINK libspdk_accel_error.so 00:02:11.274 SYMLINK libspdk_accel_ioat.so 00:02:11.274 SYMLINK libspdk_accel_iaa.so 00:02:11.274 SO libspdk_accel_dsa.so.5.0 00:02:11.274 LIB libspdk_vfu_device.a 00:02:11.274 SYMLINK libspdk_blob_bdev.so 00:02:11.535 SYMLINK libspdk_accel_dsa.so 00:02:11.535 SO libspdk_vfu_device.so.3.0 00:02:11.535 SYMLINK libspdk_vfu_device.so 00:02:11.535 LIB libspdk_fsdev_aio.a 00:02:11.535 SO libspdk_fsdev_aio.so.1.0 00:02:11.795 LIB libspdk_sock_posix.a 00:02:11.795 SO libspdk_sock_posix.so.6.0 00:02:11.795 SYMLINK libspdk_fsdev_aio.so 00:02:11.795 SYMLINK libspdk_sock_posix.so 00:02:12.056 CC module/bdev/gpt/gpt.o 00:02:12.056 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.056 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.056 CC module/bdev/error/vbdev_error.o 00:02:12.056 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.056 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.056 CC module/bdev/nvme/bdev_nvme.o 00:02:12.056 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.056 CC module/bdev/nvme/nvme_rpc.o 00:02:12.056 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.056 CC module/bdev/raid/bdev_raid.o 00:02:12.056 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.056 CC module/bdev/nvme/vbdev_opal.o 00:02:12.056 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.056 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.056 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.056 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.056 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.056 CC module/bdev/split/vbdev_split.o 00:02:12.056 CC module/bdev/malloc/bdev_malloc.o 00:02:12.056 CC module/bdev/raid/raid0.o 00:02:12.056 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.056 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.056 CC module/bdev/delay/vbdev_delay.o 00:02:12.056 CC module/bdev/raid/raid1.o 00:02:12.056 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.056 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.056 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.056 CC module/bdev/null/bdev_null.o 00:02:12.056 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.056 CC module/bdev/raid/concat.o 00:02:12.056 CC module/bdev/null/bdev_null_rpc.o 00:02:12.056 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.056 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.056 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.056 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.056 CC module/bdev/aio/bdev_aio.o 00:02:12.056 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.056 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.056 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.056 CC module/bdev/ftl/bdev_ftl.o 00:02:12.056 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.316 LIB libspdk_blobfs_bdev.a 00:02:12.316 SO libspdk_blobfs_bdev.so.6.0 00:02:12.316 LIB libspdk_bdev_gpt.a 00:02:12.316 LIB libspdk_bdev_split.a 00:02:12.316 SO libspdk_bdev_gpt.so.6.0 00:02:12.316 SO libspdk_bdev_split.so.6.0 00:02:12.316 LIB libspdk_bdev_error.a 00:02:12.316 SYMLINK 
libspdk_blobfs_bdev.so 00:02:12.316 LIB libspdk_bdev_null.a 00:02:12.316 LIB libspdk_bdev_zone_block.a 00:02:12.316 SO libspdk_bdev_error.so.6.0 00:02:12.316 LIB libspdk_bdev_passthru.a 00:02:12.316 SO libspdk_bdev_null.so.6.0 00:02:12.316 SYMLINK libspdk_bdev_split.so 00:02:12.316 SYMLINK libspdk_bdev_gpt.so 00:02:12.316 LIB libspdk_bdev_ftl.a 00:02:12.316 SO libspdk_bdev_zone_block.so.6.0 00:02:12.316 SO libspdk_bdev_passthru.so.6.0 00:02:12.577 SO libspdk_bdev_ftl.so.6.0 00:02:12.577 LIB libspdk_bdev_aio.a 00:02:12.577 SYMLINK libspdk_bdev_error.so 00:02:12.577 LIB libspdk_bdev_malloc.a 00:02:12.577 LIB libspdk_bdev_iscsi.a 00:02:12.577 LIB libspdk_bdev_delay.a 00:02:12.577 SYMLINK libspdk_bdev_null.so 00:02:12.577 SO libspdk_bdev_aio.so.6.0 00:02:12.577 LIB libspdk_bdev_lvol.a 00:02:12.577 SO libspdk_bdev_malloc.so.6.0 00:02:12.577 SO libspdk_bdev_iscsi.so.6.0 00:02:12.577 SYMLINK libspdk_bdev_zone_block.so 00:02:12.577 SO libspdk_bdev_delay.so.6.0 00:02:12.577 SYMLINK libspdk_bdev_passthru.so 00:02:12.577 SYMLINK libspdk_bdev_ftl.so 00:02:12.577 SO libspdk_bdev_lvol.so.6.0 00:02:12.577 SYMLINK libspdk_bdev_aio.so 00:02:12.577 SYMLINK libspdk_bdev_malloc.so 00:02:12.577 SYMLINK libspdk_bdev_iscsi.so 00:02:12.577 LIB libspdk_bdev_virtio.a 00:02:12.577 SYMLINK libspdk_bdev_delay.so 00:02:12.577 SO libspdk_bdev_virtio.so.6.0 00:02:12.577 SYMLINK libspdk_bdev_lvol.so 00:02:12.837 SYMLINK libspdk_bdev_virtio.so 00:02:13.098 LIB libspdk_bdev_raid.a 00:02:13.098 SO libspdk_bdev_raid.so.6.0 00:02:13.098 SYMLINK libspdk_bdev_raid.so 00:02:14.040 LIB libspdk_bdev_nvme.a 00:02:14.040 SO libspdk_bdev_nvme.so.7.0 00:02:14.301 SYMLINK libspdk_bdev_nvme.so 00:02:14.873 CC module/event/subsystems/vmd/vmd.o 00:02:14.873 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.873 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.873 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.873 CC module/event/subsystems/keyring/keyring.o 00:02:14.873 CC module/event/subsystems/sock/sock.o 00:02:14.873 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.873 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.873 CC module/event/subsystems/fsdev/fsdev.o 00:02:14.873 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.133 LIB libspdk_event_fsdev.a 00:02:15.133 LIB libspdk_event_vfu_tgt.a 00:02:15.133 LIB libspdk_event_sock.a 00:02:15.133 LIB libspdk_event_keyring.a 00:02:15.133 LIB libspdk_event_scheduler.a 00:02:15.133 LIB libspdk_event_vhost_blk.a 00:02:15.133 LIB libspdk_event_vmd.a 00:02:15.133 LIB libspdk_event_iobuf.a 00:02:15.133 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.133 SO libspdk_event_sock.so.5.0 00:02:15.133 SO libspdk_event_fsdev.so.1.0 00:02:15.133 SO libspdk_event_scheduler.so.4.0 00:02:15.133 SO libspdk_event_keyring.so.1.0 00:02:15.133 SO libspdk_event_vhost_blk.so.3.0 00:02:15.133 SO libspdk_event_vmd.so.6.0 00:02:15.133 SO libspdk_event_iobuf.so.3.0 00:02:15.394 SYMLINK libspdk_event_sock.so 00:02:15.394 SYMLINK libspdk_event_fsdev.so 00:02:15.394 SYMLINK libspdk_event_vfu_tgt.so 00:02:15.394 SYMLINK libspdk_event_keyring.so 00:02:15.394 SYMLINK libspdk_event_vhost_blk.so 00:02:15.394 SYMLINK libspdk_event_scheduler.so 00:02:15.394 SYMLINK libspdk_event_vmd.so 00:02:15.394 SYMLINK libspdk_event_iobuf.so 00:02:15.654 CC module/event/subsystems/accel/accel.o 00:02:15.914 LIB libspdk_event_accel.a 00:02:15.914 SO libspdk_event_accel.so.6.0 00:02:15.914 SYMLINK libspdk_event_accel.so 00:02:16.174 CC module/event/subsystems/bdev/bdev.o 00:02:16.435 LIB libspdk_event_bdev.a 00:02:16.435 
SO libspdk_event_bdev.so.6.0 00:02:16.435 SYMLINK libspdk_event_bdev.so 00:02:17.009 CC module/event/subsystems/scsi/scsi.o 00:02:17.009 CC module/event/subsystems/nbd/nbd.o 00:02:17.009 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.009 CC module/event/subsystems/ublk/ublk.o 00:02:17.009 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.009 LIB libspdk_event_ublk.a 00:02:17.009 LIB libspdk_event_scsi.a 00:02:17.009 LIB libspdk_event_nbd.a 00:02:17.009 SO libspdk_event_scsi.so.6.0 00:02:17.009 SO libspdk_event_ublk.so.3.0 00:02:17.009 SO libspdk_event_nbd.so.6.0 00:02:17.270 LIB libspdk_event_nvmf.a 00:02:17.270 SYMLINK libspdk_event_ublk.so 00:02:17.270 SYMLINK libspdk_event_scsi.so 00:02:17.270 SYMLINK libspdk_event_nbd.so 00:02:17.270 SO libspdk_event_nvmf.so.6.0 00:02:17.270 SYMLINK libspdk_event_nvmf.so 00:02:17.531 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.531 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.792 LIB libspdk_event_vhost_scsi.a 00:02:17.792 LIB libspdk_event_iscsi.a 00:02:17.792 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.792 SO libspdk_event_iscsi.so.6.0 00:02:17.792 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.792 SYMLINK libspdk_event_iscsi.so 00:02:18.052 SO libspdk.so.6.0 00:02:18.052 SYMLINK libspdk.so 00:02:18.312 CC app/trace_record/trace_record.o 00:02:18.313 CXX app/trace/trace.o 00:02:18.313 TEST_HEADER include/spdk/accel.h 00:02:18.313 CC app/spdk_top/spdk_top.o 00:02:18.313 TEST_HEADER include/spdk/accel_module.h 00:02:18.313 TEST_HEADER include/spdk/assert.h 00:02:18.313 TEST_HEADER include/spdk/barrier.h 00:02:18.313 TEST_HEADER include/spdk/base64.h 00:02:18.589 CC app/spdk_nvme_perf/perf.o 00:02:18.589 CC test/rpc_client/rpc_client_test.o 00:02:18.589 TEST_HEADER include/spdk/bdev.h 00:02:18.589 TEST_HEADER include/spdk/bdev_module.h 00:02:18.589 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.589 TEST_HEADER include/spdk/bit_array.h 00:02:18.589 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.589 CC app/spdk_nvme_identify/identify.o 00:02:18.589 TEST_HEADER include/spdk/bit_pool.h 00:02:18.589 CC app/spdk_lspci/spdk_lspci.o 00:02:18.589 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.589 TEST_HEADER include/spdk/blobfs.h 00:02:18.589 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.589 TEST_HEADER include/spdk/blob.h 00:02:18.589 TEST_HEADER include/spdk/conf.h 00:02:18.589 TEST_HEADER include/spdk/config.h 00:02:18.589 TEST_HEADER include/spdk/crc16.h 00:02:18.589 TEST_HEADER include/spdk/cpuset.h 00:02:18.589 TEST_HEADER include/spdk/crc64.h 00:02:18.589 TEST_HEADER include/spdk/crc32.h 00:02:18.589 TEST_HEADER include/spdk/dif.h 00:02:18.589 TEST_HEADER include/spdk/dma.h 00:02:18.589 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.589 TEST_HEADER include/spdk/endian.h 00:02:18.589 TEST_HEADER include/spdk/env.h 00:02:18.589 TEST_HEADER include/spdk/fd.h 00:02:18.589 TEST_HEADER include/spdk/fd_group.h 00:02:18.589 TEST_HEADER include/spdk/event.h 00:02:18.589 TEST_HEADER include/spdk/file.h 00:02:18.589 TEST_HEADER include/spdk/fsdev_module.h 00:02:18.589 TEST_HEADER include/spdk/fsdev.h 00:02:18.589 TEST_HEADER include/spdk/ftl.h 00:02:18.589 CC app/spdk_dd/spdk_dd.o 00:02:18.589 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:18.589 TEST_HEADER include/spdk/hexlify.h 00:02:18.589 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.589 TEST_HEADER include/spdk/histogram_data.h 00:02:18.589 TEST_HEADER include/spdk/idxd.h 00:02:18.589 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.589 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.589 
TEST_HEADER include/spdk/init.h 00:02:18.589 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.589 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.589 TEST_HEADER include/spdk/ioat.h 00:02:18.589 TEST_HEADER include/spdk/json.h 00:02:18.589 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.589 TEST_HEADER include/spdk/keyring.h 00:02:18.589 TEST_HEADER include/spdk/keyring_module.h 00:02:18.589 TEST_HEADER include/spdk/log.h 00:02:18.589 TEST_HEADER include/spdk/likely.h 00:02:18.589 TEST_HEADER include/spdk/lvol.h 00:02:18.589 TEST_HEADER include/spdk/md5.h 00:02:18.589 TEST_HEADER include/spdk/memory.h 00:02:18.589 TEST_HEADER include/spdk/mmio.h 00:02:18.589 TEST_HEADER include/spdk/nbd.h 00:02:18.589 CC app/spdk_tgt/spdk_tgt.o 00:02:18.589 TEST_HEADER include/spdk/net.h 00:02:18.589 TEST_HEADER include/spdk/nvme.h 00:02:18.589 TEST_HEADER include/spdk/notify.h 00:02:18.589 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.589 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.589 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.589 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.589 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.589 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.589 TEST_HEADER include/spdk/nvmf.h 00:02:18.589 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.589 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.589 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.589 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.589 TEST_HEADER include/spdk/opal.h 00:02:18.589 TEST_HEADER include/spdk/opal_spec.h 00:02:18.589 TEST_HEADER include/spdk/queue.h 00:02:18.589 TEST_HEADER include/spdk/pci_ids.h 00:02:18.589 TEST_HEADER include/spdk/rpc.h 00:02:18.589 TEST_HEADER include/spdk/reduce.h 00:02:18.589 TEST_HEADER include/spdk/pipe.h 00:02:18.589 CC app/nvmf_tgt/nvmf_main.o 00:02:18.589 TEST_HEADER include/spdk/scheduler.h 00:02:18.589 TEST_HEADER include/spdk/scsi.h 00:02:18.589 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.589 TEST_HEADER include/spdk/sock.h 00:02:18.589 TEST_HEADER include/spdk/stdinc.h 00:02:18.589 TEST_HEADER include/spdk/string.h 00:02:18.589 TEST_HEADER include/spdk/thread.h 00:02:18.589 TEST_HEADER include/spdk/trace.h 00:02:18.589 TEST_HEADER include/spdk/trace_parser.h 00:02:18.589 TEST_HEADER include/spdk/tree.h 00:02:18.589 TEST_HEADER include/spdk/ublk.h 00:02:18.589 TEST_HEADER include/spdk/util.h 00:02:18.589 TEST_HEADER include/spdk/uuid.h 00:02:18.589 TEST_HEADER include/spdk/version.h 00:02:18.589 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.589 TEST_HEADER include/spdk/vhost.h 00:02:18.589 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.589 TEST_HEADER include/spdk/vmd.h 00:02:18.589 TEST_HEADER include/spdk/zipf.h 00:02:18.589 TEST_HEADER include/spdk/xor.h 00:02:18.589 CXX test/cpp_headers/accel.o 00:02:18.589 CXX test/cpp_headers/accel_module.o 00:02:18.589 CXX test/cpp_headers/assert.o 00:02:18.589 CXX test/cpp_headers/bdev.o 00:02:18.589 CXX test/cpp_headers/base64.o 00:02:18.589 CXX test/cpp_headers/barrier.o 00:02:18.589 CXX test/cpp_headers/bdev_zone.o 00:02:18.589 CXX test/cpp_headers/bit_pool.o 00:02:18.589 CXX test/cpp_headers/blob_bdev.o 00:02:18.589 CXX test/cpp_headers/bdev_module.o 00:02:18.589 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.589 CXX test/cpp_headers/blobfs.o 00:02:18.589 CXX test/cpp_headers/bit_array.o 00:02:18.589 CXX test/cpp_headers/blob.o 00:02:18.589 CXX test/cpp_headers/conf.o 00:02:18.589 CXX test/cpp_headers/config.o 00:02:18.589 CXX test/cpp_headers/cpuset.o 00:02:18.589 CXX test/cpp_headers/crc16.o 00:02:18.589 CXX 
test/cpp_headers/crc32.o 00:02:18.589 CXX test/cpp_headers/crc64.o 00:02:18.589 CXX test/cpp_headers/dif.o 00:02:18.589 CXX test/cpp_headers/dma.o 00:02:18.589 CXX test/cpp_headers/endian.o 00:02:18.589 CXX test/cpp_headers/env_dpdk.o 00:02:18.589 CXX test/cpp_headers/fd_group.o 00:02:18.589 CXX test/cpp_headers/env.o 00:02:18.589 CXX test/cpp_headers/event.o 00:02:18.589 CXX test/cpp_headers/fd.o 00:02:18.589 CXX test/cpp_headers/file.o 00:02:18.589 CXX test/cpp_headers/fsdev.o 00:02:18.589 CXX test/cpp_headers/fsdev_module.o 00:02:18.589 CXX test/cpp_headers/ftl.o 00:02:18.589 CXX test/cpp_headers/fuse_dispatcher.o 00:02:18.589 CXX test/cpp_headers/histogram_data.o 00:02:18.589 CXX test/cpp_headers/gpt_spec.o 00:02:18.589 CXX test/cpp_headers/hexlify.o 00:02:18.589 CXX test/cpp_headers/init.o 00:02:18.589 CXX test/cpp_headers/idxd_spec.o 00:02:18.589 CXX test/cpp_headers/idxd.o 00:02:18.589 CXX test/cpp_headers/ioat.o 00:02:18.589 CXX test/cpp_headers/json.o 00:02:18.589 CXX test/cpp_headers/ioat_spec.o 00:02:18.589 CXX test/cpp_headers/iscsi_spec.o 00:02:18.589 CXX test/cpp_headers/keyring.o 00:02:18.589 CXX test/cpp_headers/keyring_module.o 00:02:18.590 CXX test/cpp_headers/likely.o 00:02:18.590 CXX test/cpp_headers/log.o 00:02:18.590 CXX test/cpp_headers/jsonrpc.o 00:02:18.590 CXX test/cpp_headers/lvol.o 00:02:18.590 CXX test/cpp_headers/memory.o 00:02:18.590 CXX test/cpp_headers/mmio.o 00:02:18.590 CC examples/ioat/perf/perf.o 00:02:18.590 CXX test/cpp_headers/nbd.o 00:02:18.590 CXX test/cpp_headers/md5.o 00:02:18.590 CXX test/cpp_headers/net.o 00:02:18.590 CXX test/cpp_headers/notify.o 00:02:18.590 CXX test/cpp_headers/nvme_intel.o 00:02:18.590 CXX test/cpp_headers/nvme.o 00:02:18.590 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.590 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.861 CC app/fio/nvme/fio_plugin.o 00:02:18.861 CXX test/cpp_headers/nvme_spec.o 00:02:18.861 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.861 CC examples/ioat/verify/verify.o 00:02:18.861 CXX test/cpp_headers/nvme_zns.o 00:02:18.861 CXX test/cpp_headers/nvmf_spec.o 00:02:18.861 CXX test/cpp_headers/nvmf_transport.o 00:02:18.861 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.861 CC test/app/histogram_perf/histogram_perf.o 00:02:18.861 CXX test/cpp_headers/nvmf.o 00:02:18.861 LINK rpc_client_test 00:02:18.861 CXX test/cpp_headers/opal_spec.o 00:02:18.861 CXX test/cpp_headers/pci_ids.o 00:02:18.861 CC test/thread/poller_perf/poller_perf.o 00:02:18.862 CXX test/cpp_headers/opal.o 00:02:18.862 CXX test/cpp_headers/reduce.o 00:02:18.862 CXX test/cpp_headers/pipe.o 00:02:18.862 CXX test/cpp_headers/rpc.o 00:02:18.862 CXX test/cpp_headers/queue.o 00:02:18.862 CXX test/cpp_headers/scsi.o 00:02:18.862 CXX test/cpp_headers/scheduler.o 00:02:18.862 CXX test/cpp_headers/scsi_spec.o 00:02:18.862 CXX test/cpp_headers/sock.o 00:02:18.862 CXX test/cpp_headers/stdinc.o 00:02:18.862 CXX test/cpp_headers/string.o 00:02:18.862 CC examples/util/zipf/zipf.o 00:02:18.862 CXX test/cpp_headers/thread.o 00:02:18.862 CXX test/cpp_headers/trace.o 00:02:18.862 CXX test/cpp_headers/trace_parser.o 00:02:18.862 CXX test/cpp_headers/tree.o 00:02:18.862 CXX test/cpp_headers/ublk.o 00:02:18.862 CC test/env/memory/memory_ut.o 00:02:18.862 CXX test/cpp_headers/util.o 00:02:18.862 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:18.862 CXX test/cpp_headers/uuid.o 00:02:18.862 CC test/app/stub/stub.o 00:02:18.862 CXX test/cpp_headers/version.o 00:02:18.862 CXX test/cpp_headers/vhost.o 00:02:18.862 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.862 
CXX test/cpp_headers/vfio_user_spec.o 00:02:18.862 CXX test/cpp_headers/vmd.o 00:02:18.862 CC test/dma/test_dma/test_dma.o 00:02:18.862 CXX test/cpp_headers/xor.o 00:02:18.862 CXX test/cpp_headers/zipf.o 00:02:18.862 CC test/env/vtophys/vtophys.o 00:02:18.862 CC test/app/jsoncat/jsoncat.o 00:02:18.862 CC test/app/bdev_svc/bdev_svc.o 00:02:18.862 CC app/fio/bdev/fio_plugin.o 00:02:19.140 CC test/env/pci/pci_ut.o 00:02:19.140 LINK iscsi_tgt 00:02:19.140 LINK spdk_tgt 00:02:19.140 LINK nvmf_tgt 00:02:19.140 LINK spdk_lspci 00:02:19.418 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.418 LINK spdk_dd 00:02:19.418 LINK spdk_trace_record 00:02:19.418 LINK spdk_nvme_discover 00:02:19.695 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.695 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.695 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.695 LINK histogram_perf 00:02:19.695 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:19.695 LINK poller_perf 00:02:19.695 LINK zipf 00:02:19.695 LINK verify 00:02:19.695 LINK ioat_perf 00:02:19.957 LINK bdev_svc 00:02:19.957 LINK interrupt_tgt 00:02:19.957 LINK spdk_nvme 00:02:19.957 LINK spdk_top 00:02:19.957 LINK jsoncat 00:02:20.218 LINK spdk_nvme_identify 00:02:20.218 LINK spdk_nvme_perf 00:02:20.218 LINK vhost_fuzz 00:02:20.218 LINK nvme_fuzz 00:02:20.218 LINK stub 00:02:20.218 LINK mem_callbacks 00:02:20.218 LINK env_dpdk_post_init 00:02:20.218 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.218 LINK vtophys 00:02:20.218 CC examples/vmd/led/led.o 00:02:20.218 CC examples/idxd/perf/perf.o 00:02:20.218 CC test/event/event_perf/event_perf.o 00:02:20.480 CC examples/sock/hello_world/hello_sock.o 00:02:20.480 CC test/event/reactor/reactor.o 00:02:20.480 CC test/event/reactor_perf/reactor_perf.o 00:02:20.480 CC test/event/app_repeat/app_repeat.o 00:02:20.480 CC test/event/scheduler/scheduler.o 00:02:20.480 CC examples/thread/thread/thread_ex.o 00:02:20.480 LINK spdk_trace 00:02:20.480 LINK lsvmd 00:02:20.480 LINK led 00:02:20.480 LINK event_perf 00:02:20.480 LINK reactor 00:02:20.480 LINK reactor_perf 00:02:20.480 LINK app_repeat 00:02:20.480 LINK pci_ut 00:02:20.480 LINK hello_sock 00:02:20.743 LINK spdk_bdev 00:02:20.743 LINK scheduler 00:02:20.743 LINK test_dma 00:02:20.743 LINK idxd_perf 00:02:20.743 LINK thread 00:02:20.743 CC app/vhost/vhost.o 00:02:21.004 LINK vhost 00:02:21.264 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.264 CC examples/nvme/abort/abort.o 00:02:21.264 CC examples/nvme/reconnect/reconnect.o 00:02:21.264 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.264 CC examples/nvme/arbitration/arbitration.o 00:02:21.264 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.264 CC examples/nvme/hello_world/hello_world.o 00:02:21.264 CC examples/nvme/hotplug/hotplug.o 00:02:21.264 LINK memory_ut 00:02:21.264 CC test/nvme/reserve/reserve.o 00:02:21.264 CC test/nvme/aer/aer.o 00:02:21.264 CC test/nvme/overhead/overhead.o 00:02:21.264 CC test/nvme/cuse/cuse.o 00:02:21.264 CC test/nvme/err_injection/err_injection.o 00:02:21.264 CC test/nvme/reset/reset.o 00:02:21.264 CC test/nvme/sgl/sgl.o 00:02:21.264 CC test/nvme/startup/startup.o 00:02:21.264 CC test/nvme/e2edp/nvme_dp.o 00:02:21.264 CC test/nvme/compliance/nvme_compliance.o 00:02:21.264 CC test/nvme/boot_partition/boot_partition.o 00:02:21.264 CC test/nvme/fdp/fdp.o 00:02:21.264 CC test/nvme/connect_stress/connect_stress.o 00:02:21.264 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.264 CC test/nvme/simple_copy/simple_copy.o 00:02:21.264 CC test/nvme/fused_ordering/fused_ordering.o 
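The long run of CXX test/cpp_headers/<name>.o steps earlier in this stretch, paired with the TEST_HEADER include/spdk/*.h list, is a public-header check: each installed header is compiled in its own C++ translation unit so that a missing include or extern "C" guard fails the build. A minimal sketch of the same idea, with hypothetical file names and an assumed SPDK source-tree working directory:

  # prove spdk/nvme.h is self-contained and safe to include from C++
  echo '#include "spdk/nvme.h"' > hdr_check.cpp
  c++ -I include -c hdr_check.cpp -o /dev/null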
00:02:21.264 CC examples/accel/perf/accel_perf.o 00:02:21.264 CC test/accel/dif/dif.o 00:02:21.264 CC test/blobfs/mkfs/mkfs.o 00:02:21.264 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:21.264 LINK iscsi_fuzz 00:02:21.264 CC examples/blob/hello_world/hello_blob.o 00:02:21.264 CC examples/blob/cli/blobcli.o 00:02:21.525 LINK pmr_persistence 00:02:21.525 LINK cmb_copy 00:02:21.525 LINK hotplug 00:02:21.525 CC test/lvol/esnap/esnap.o 00:02:21.525 LINK hello_world 00:02:21.525 LINK arbitration 00:02:21.525 LINK startup 00:02:21.525 LINK reconnect 00:02:21.525 LINK fused_ordering 00:02:21.525 LINK connect_stress 00:02:21.525 LINK boot_partition 00:02:21.525 LINK err_injection 00:02:21.525 LINK reserve 00:02:21.525 LINK doorbell_aers 00:02:21.525 LINK simple_copy 00:02:21.525 LINK abort 00:02:21.525 LINK nvme_dp 00:02:21.525 LINK reset 00:02:21.525 LINK mkfs 00:02:21.525 LINK aer 00:02:21.786 LINK sgl 00:02:21.786 LINK overhead 00:02:21.786 LINK nvme_compliance 00:02:21.786 LINK hello_blob 00:02:21.786 LINK nvme_manage 00:02:21.786 LINK fdp 00:02:21.786 LINK hello_fsdev 00:02:21.786 LINK accel_perf 00:02:21.786 LINK blobcli 00:02:22.047 LINK dif 00:02:22.308 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.308 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.569 LINK cuse 00:02:22.569 CC test/bdev/bdevio/bdevio.o 00:02:22.569 LINK hello_bdev 00:02:23.141 LINK bdevio 00:02:23.141 LINK bdevperf 00:02:23.712 CC examples/nvmf/nvmf/nvmf.o 00:02:24.284 LINK nvmf 00:02:25.669 LINK esnap 00:02:25.930 00:02:25.930 real 0m56.220s 00:02:25.930 user 8m4.160s 00:02:25.930 sys 5m53.812s 00:02:25.930 11:47:02 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.930 11:47:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.930 ************************************ 00:02:25.930 END TEST make 00:02:25.930 ************************************ 00:02:25.930 11:47:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.930 11:47:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.930 11:47:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.930 11:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.930 11:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.930 11:47:02 -- pm/common@44 -- $ pid=649055 00:02:25.930 11:47:02 -- pm/common@50 -- $ kill -TERM 649055 00:02:25.930 11:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.930 11:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.930 11:47:02 -- pm/common@44 -- $ pid=649056 00:02:25.930 11:47:02 -- pm/common@50 -- $ kill -TERM 649056 00:02:25.930 11:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.930 11:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.930 11:47:02 -- pm/common@44 -- $ pid=649058 00:02:25.930 11:47:02 -- pm/common@50 -- $ kill -TERM 649058 00:02:25.930 11:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.930 11:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.930 11:47:02 -- pm/common@44 -- $ pid=649082 00:02:25.930 11:47:02 -- pm/common@50 -- $ sudo -E kill -TERM 649082 00:02:26.191 11:47:02 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:26.191 11:47:02 -- 
common/autotest_common.sh@1691 -- # lcov --version 00:02:26.191 11:47:02 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:26.191 11:47:02 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:26.191 11:47:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:26.191 11:47:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:26.191 11:47:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:26.191 11:47:02 -- scripts/common.sh@336 -- # IFS=.-: 00:02:26.191 11:47:02 -- scripts/common.sh@336 -- # read -ra ver1 00:02:26.191 11:47:02 -- scripts/common.sh@337 -- # IFS=.-: 00:02:26.191 11:47:02 -- scripts/common.sh@337 -- # read -ra ver2 00:02:26.191 11:47:02 -- scripts/common.sh@338 -- # local 'op=<' 00:02:26.191 11:47:02 -- scripts/common.sh@340 -- # ver1_l=2 00:02:26.191 11:47:02 -- scripts/common.sh@341 -- # ver2_l=1 00:02:26.191 11:47:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:26.192 11:47:02 -- scripts/common.sh@344 -- # case "$op" in 00:02:26.192 11:47:02 -- scripts/common.sh@345 -- # : 1 00:02:26.192 11:47:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:26.192 11:47:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:26.192 11:47:02 -- scripts/common.sh@365 -- # decimal 1 00:02:26.192 11:47:02 -- scripts/common.sh@353 -- # local d=1 00:02:26.192 11:47:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:26.192 11:47:02 -- scripts/common.sh@355 -- # echo 1 00:02:26.192 11:47:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:26.192 11:47:02 -- scripts/common.sh@366 -- # decimal 2 00:02:26.192 11:47:02 -- scripts/common.sh@353 -- # local d=2 00:02:26.192 11:47:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:26.192 11:47:02 -- scripts/common.sh@355 -- # echo 2 00:02:26.192 11:47:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:26.192 11:47:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:26.192 11:47:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:26.192 11:47:02 -- scripts/common.sh@368 -- # return 0 00:02:26.192 11:47:02 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:26.192 11:47:02 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.192 --rc genhtml_branch_coverage=1 00:02:26.192 --rc genhtml_function_coverage=1 00:02:26.192 --rc genhtml_legend=1 00:02:26.192 --rc geninfo_all_blocks=1 00:02:26.192 --rc geninfo_unexecuted_blocks=1 00:02:26.192 00:02:26.192 ' 00:02:26.192 11:47:02 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.192 --rc genhtml_branch_coverage=1 00:02:26.192 --rc genhtml_function_coverage=1 00:02:26.192 --rc genhtml_legend=1 00:02:26.192 --rc geninfo_all_blocks=1 00:02:26.192 --rc geninfo_unexecuted_blocks=1 00:02:26.192 00:02:26.192 ' 00:02:26.192 11:47:02 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.192 --rc genhtml_branch_coverage=1 00:02:26.192 --rc genhtml_function_coverage=1 00:02:26.192 --rc genhtml_legend=1 00:02:26.192 --rc geninfo_all_blocks=1 00:02:26.192 --rc geninfo_unexecuted_blocks=1 00:02:26.192 00:02:26.192 ' 00:02:26.192 11:47:02 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.192 --rc genhtml_branch_coverage=1 00:02:26.192 
--rc genhtml_function_coverage=1 00:02:26.192 --rc genhtml_legend=1 00:02:26.192 --rc geninfo_all_blocks=1 00:02:26.192 --rc geninfo_unexecuted_blocks=1 00:02:26.192 00:02:26.192 ' 00:02:26.192 11:47:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.192 11:47:02 -- nvmf/common.sh@7 -- # uname -s 00:02:26.192 11:47:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.192 11:47:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.192 11:47:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.192 11:47:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.192 11:47:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.192 11:47:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.192 11:47:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.192 11:47:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.192 11:47:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.192 11:47:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.192 11:47:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:26.192 11:47:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:26.192 11:47:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.192 11:47:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.192 11:47:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:26.192 11:47:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.192 11:47:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:26.192 11:47:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:26.192 11:47:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.192 11:47:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.192 11:47:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.192 11:47:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.192 11:47:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.192 11:47:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.192 11:47:02 -- paths/export.sh@5 -- # export PATH 00:02:26.192 11:47:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.192 11:47:02 -- nvmf/common.sh@51 -- # : 0 00:02:26.192 11:47:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:26.192 
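
The nvmf/common.sh block traced above pins down the fabric identity for the whole run: three TCP listener ports (4420-4422), a host NQN freshly minted with nvme gen-hostnqn, the UUID part of it reused as the host ID, and an NVME_HOST array that pre-bakes the --hostnqn/--hostid flags. A hedged sketch of how such an array is typically expanded into a connect command later in the suite; the target address and subsystem NQN below are placeholders, not values from this run:

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Placeholder address/subsystem; the real values come from the test env.
    nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
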
11:47:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:26.192 11:47:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.192 11:47:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.192 11:47:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.192 11:47:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:26.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:26.192 11:47:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:26.192 11:47:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:26.192 11:47:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:26.454 11:47:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.454 11:47:02 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.454 11:47:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.454 11:47:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.454 11:47:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.454 11:47:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.454 11:47:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.454 11:47:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.454 11:47:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.454 11:47:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.454 11:47:02 -- spdk/autotest.sh@48 -- # udevadm_pid=714177 00:02:26.454 11:47:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.454 11:47:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.454 11:47:02 -- pm/common@17 -- # local monitor 00:02:26.454 11:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.454 11:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.454 11:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.454 11:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.454 11:47:02 -- pm/common@21 -- # date +%s 00:02:26.454 11:47:02 -- pm/common@21 -- # date +%s 00:02:26.454 11:47:02 -- pm/common@25 -- # sleep 1 00:02:26.454 11:47:02 -- pm/common@21 -- # date +%s 00:02:26.454 11:47:02 -- pm/common@21 -- # date +%s 00:02:26.454 11:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729504022 00:02:26.454 11:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729504022 00:02:26.454 11:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729504022 00:02:26.454 11:47:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729504022 00:02:26.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729504022_collect-cpu-load.pm.log 
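
Two things happen in the autotest.sh block above: the kernel core pattern is pointed at SPDK's core-collector.sh, so any crash during the run is piped straight to a script that files the dump under the output directory, and a set of background CPU/vmstat/temperature/BMC monitors is started with a shared epoch-stamped log prefix (the "Redirecting to ...pm.log" lines). A sketch of the core-pattern half, assuming root and the standard procfs path:

    # Save the current handler so it can be restored after the run.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)

    # A leading '|' makes the kernel pipe the core to the named program;
    # %P = PID, %s = signal number, %t = dump time (see core(5)).
    echo '|/path/to/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
    mkdir -p "$output_dir/coredumps"

    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern
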
00:02:26.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729504022_collect-vmstat.pm.log 00:02:26.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729504022_collect-cpu-temp.pm.log 00:02:26.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729504022_collect-bmc-pm.bmc.pm.log 00:02:27.397 11:47:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.397 11:47:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.397 11:47:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:27.397 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:02:27.397 11:47:03 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.397 11:47:03 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:27.397 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:02:27.397 11:47:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:27.397 11:47:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.397 11:47:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.397 11:47:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:27.397 11:47:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.397 11:47:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.398 11:47:03 -- common/autotest_common.sh@1455 -- # uname 00:02:27.398 11:47:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:27.398 11:47:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.398 11:47:03 -- common/autotest_common.sh@1475 -- # uname 00:02:27.398 11:47:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.398 11:47:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:27.398 11:47:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:27.398 lcov: LCOV version 1.15 00:02:27.398 11:47:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:53.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:53.980 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:58.186 11:47:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:58.186 11:47:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:58.186 11:47:34 -- common/autotest_common.sh@10 -- # set +x 00:02:58.186 11:47:34 -- spdk/autotest.sh@78 -- # rm -f 00:02:58.186 11:47:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.566 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.7 (8086 0b00): Already using the ioatdma 
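
The lcov invocation traced above is the coverage baseline: -i performs an "initial" capture that records every instrumented line with a zero hit count before any test has executed (the nvme_stubs.gcno warning is benign, since that object genuinely contains no functions). A sketch of the standard lcov baseline-then-merge pattern this sets up; the post-test capture and merge shown here are the conventional counterpart, not necessarily verbatim what autotest.sh runs later:

    # Initial (-i) capture: all instrumented lines, zero execution counts.
    lcov -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info

    # After the tests, capture the real counters and merge, so source files
    # never touched by any test still appear in the report at 0%.
    lcov -q -c --no-external -t Tests -d "$src" -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
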
driver 00:03:01.566 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:01.566 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:01.566 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:01.827 11:47:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:01.827 11:47:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:01.827 11:47:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:01.827 11:47:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:01.827 11:47:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:01.827 11:47:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:01.827 11:47:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:01.827 11:47:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.827 11:47:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:01.827 11:47:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:01.827 11:47:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:01.827 11:47:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:01.827 11:47:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:01.827 11:47:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:01.827 11:47:38 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:01.827 No valid GPT data, bailing 00:03:01.827 11:47:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.827 11:47:38 -- scripts/common.sh@394 -- # pt= 00:03:01.827 11:47:38 -- scripts/common.sh@395 -- # return 1 00:03:01.827 11:47:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.827 1+0 records in 00:03:01.828 1+0 records out 00:03:01.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228517 s, 459 MB/s 00:03:01.828 11:47:38 -- spdk/autotest.sh@105 -- # sync 00:03:01.828 11:47:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.828 11:47:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.828 11:47:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:11.833 11:47:46 -- spdk/autotest.sh@111 -- # uname -s 00:03:11.833 11:47:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:11.833 11:47:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:11.833 11:47:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:13.746 Hugepages 00:03:13.746 node hugesize free / total 00:03:13.746 node0 1048576kB 0 / 0 00:03:13.746 
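
The pre-cleanup traced above walks every NVMe block node, skips zoned namespaces (whose queue/zoned sysfs attribute reads something other than "none"), and only scrubs a device once the SPDK GPT parser has bailed ("No valid GPT data") and blkid reports no partition-table type, at which point the first MiB is zeroed. A condensed sketch of that gate:

    for dev in /sys/block/nvme*n*; do
        name=${dev##*/}
        # Zoned namespaces report 'host-aware' or 'host-managed' here.
        if [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]]; then
            echo "skipping zoned device $name"
            continue
        fi
        # Only scrub when no partition-table type is detected.
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]]; then
            dd if=/dev/zero of="/dev/$name" bs=1M count=1
        fi
    done
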
node0 2048kB 0 / 0 00:03:13.746 node1 1048576kB 0 / 0 00:03:13.746 node1 2048kB 0 / 0 00:03:13.746 00:03:13.746 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.746 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:13.746 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:14.007 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:14.007 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:14.007 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:14.007 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:14.007 11:47:50 -- spdk/autotest.sh@117 -- # uname -s 00:03:14.007 11:47:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:14.007 11:47:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:14.007 11:47:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.211 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:18.211 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:19.590 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:19.850 11:47:56 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:20.790 11:47:57 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:20.790 11:47:57 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:20.790 11:47:57 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:20.790 11:47:57 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:20.790 11:47:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:20.790 11:47:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:20.790 11:47:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:20.790 11:47:57 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:20.790 11:47:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:21.051 11:47:57 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:21.051 11:47:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:21.051 11:47:57 -- common/autotest_common.sh@1520 -- # 
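
The get_nvme_bdfs helper traced just above builds its device list by piping gen_nvme.sh's generated SPDK JSON config through jq and pulling out each controller's PCI transport address; the (( 1 == 0 )) check is the empty-list guard evaluating against the one controller found. A sketch of the same extraction, with the JSON shape shown as an illustrative comment:

    # gen_nvme.sh emits JSON shaped roughly like
    #   {"config":[{"params":{"traddr":"0000:65:00.0", ...}, ...}]}
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # here: exactly one BDF, 0000:65:00.0
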
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.350 Waiting for block devices as requested 00:03:24.350 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:24.350 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:24.611 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:24.611 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:24.611 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:24.871 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:24.871 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:24.871 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:25.132 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:25.132 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:25.392 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:25.392 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:25.392 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:25.653 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:25.653 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:25.653 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:25.913 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:26.173 11:48:02 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:26.173 11:48:02 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:26.173 11:48:02 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:26.173 11:48:02 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:26.173 11:48:02 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:26.173 11:48:02 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:26.173 11:48:02 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:26.173 11:48:02 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:26.173 11:48:02 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:26.173 11:48:02 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:26.173 11:48:02 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:26.173 11:48:02 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:26.173 11:48:02 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:26.173 11:48:02 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:26.173 11:48:02 -- common/autotest_common.sh@1541 -- # continue 00:03:26.173 11:48:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:26.173 11:48:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:26.173 11:48:02 -- common/autotest_common.sh@10 -- # set +x 00:03:26.173 11:48:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:26.173 11:48:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.173 11:48:02 -- common/autotest_common.sh@10 -- # set +x 00:03:26.173 11:48:02 -- spdk/autotest.sh@126 -- # 
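
The id-ctrl probing above is a bit test: OACS (Optional Admin Command Support) came back as 0x5f, and masking with 0x8 (bit 3, Namespace Management and Attachment per the NVMe spec) leaves 8, so the controller's namespaces can be rebuilt; UNVMCAP of 0 then says there is no unallocated capacity left, so the revert loop continues to the next device. A sketch of the same extraction, with the device path as an example:

    oacs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/oacs/ {print $2}')
    if (( oacs & 0x8 )); then
        echo "controller supports Namespace Management (OACS bit 3)"
    fi

    unvmcap=$(nvme id-ctrl /dev/nvme0 | awk -F: '/unvmcap/ {print $2}')
    # 0 unallocated bytes: every byte is already inside a namespace.
    (( unvmcap == 0 )) && echo "nothing to reclaim, skipping revert"
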
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.377 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.377 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.377 11:48:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:30.377 11:48:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:30.377 11:48:06 -- common/autotest_common.sh@10 -- # set +x 00:03:30.377 11:48:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:30.377 11:48:06 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:30.377 11:48:06 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:30.377 11:48:06 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:30.377 11:48:06 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:30.377 11:48:06 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:30.377 11:48:06 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:30.377 11:48:06 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:30.377 11:48:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:30.377 11:48:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:30.377 11:48:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:30.377 11:48:06 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:30.377 11:48:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:30.377 11:48:06 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:30.377 11:48:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:30.377 11:48:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:30.377 11:48:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:30.377 11:48:06 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:30.377 11:48:06 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:30.377 11:48:06 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:30.377 11:48:06 -- common/autotest_common.sh@1570 -- # return 0 00:03:30.377 11:48:06 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:30.377 11:48:06 -- common/autotest_common.sh@1578 -- # return 0 00:03:30.377 11:48:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:30.377 11:48:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:30.377 11:48:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:30.377 11:48:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:30.377 11:48:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:30.377 11:48:06 
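
opal_revert_cleanup, traced above, only acts on controllers whose PCI device ID matches 0x0a54 (commonly Intel's DC P45xx-series NVMe parts); it reads each candidate's ID out of sysfs and filters. The drive in this rig reports 0xa80a (a Samsung part, vendor 144d), so the list stays empty and the cleanup returns without touching anything. A sketch of that filter:

    target_id=0x0a54                 # device ID the opal cleanup looks for
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$target_id" ]] && bdfs+=("$bdf")
    done
    # Here: 0000:65:00.0 reports 0xa80a, so bdfs stays empty and
    # opal_revert_cleanup returns without issuing a revert.
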
-- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.377 11:48:06 -- common/autotest_common.sh@10 -- # set +x 00:03:30.377 11:48:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:30.377 11:48:06 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.377 11:48:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.377 11:48:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.377 11:48:06 -- common/autotest_common.sh@10 -- # set +x 00:03:30.377 ************************************ 00:03:30.377 START TEST env 00:03:30.377 ************************************ 00:03:30.377 11:48:06 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.377 * Looking for test storage... 00:03:30.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:30.638 11:48:06 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:30.638 11:48:06 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:30.638 11:48:06 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:30.638 11:48:07 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:30.638 11:48:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:30.638 11:48:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:30.638 11:48:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:30.638 11:48:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.638 11:48:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:30.638 11:48:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:30.638 11:48:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:30.638 11:48:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:30.638 11:48:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:30.638 11:48:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:30.638 11:48:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:30.638 11:48:07 env -- scripts/common.sh@344 -- # case "$op" in 00:03:30.638 11:48:07 env -- scripts/common.sh@345 -- # : 1 00:03:30.638 11:48:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:30.638 11:48:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.638 11:48:07 env -- scripts/common.sh@365 -- # decimal 1 00:03:30.638 11:48:07 env -- scripts/common.sh@353 -- # local d=1 00:03:30.638 11:48:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.638 11:48:07 env -- scripts/common.sh@355 -- # echo 1 00:03:30.638 11:48:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:30.638 11:48:07 env -- scripts/common.sh@366 -- # decimal 2 00:03:30.638 11:48:07 env -- scripts/common.sh@353 -- # local d=2 00:03:30.638 11:48:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.638 11:48:07 env -- scripts/common.sh@355 -- # echo 2 00:03:30.638 11:48:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:30.638 11:48:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:30.638 11:48:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:30.639 11:48:07 env -- scripts/common.sh@368 -- # return 0 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:30.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.639 --rc genhtml_branch_coverage=1 00:03:30.639 --rc genhtml_function_coverage=1 00:03:30.639 --rc genhtml_legend=1 00:03:30.639 --rc geninfo_all_blocks=1 00:03:30.639 --rc geninfo_unexecuted_blocks=1 00:03:30.639 00:03:30.639 ' 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:30.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.639 --rc genhtml_branch_coverage=1 00:03:30.639 --rc genhtml_function_coverage=1 00:03:30.639 --rc genhtml_legend=1 00:03:30.639 --rc geninfo_all_blocks=1 00:03:30.639 --rc geninfo_unexecuted_blocks=1 00:03:30.639 00:03:30.639 ' 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:30.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.639 --rc genhtml_branch_coverage=1 00:03:30.639 --rc genhtml_function_coverage=1 00:03:30.639 --rc genhtml_legend=1 00:03:30.639 --rc geninfo_all_blocks=1 00:03:30.639 --rc geninfo_unexecuted_blocks=1 00:03:30.639 00:03:30.639 ' 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:30.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.639 --rc genhtml_branch_coverage=1 00:03:30.639 --rc genhtml_function_coverage=1 00:03:30.639 --rc genhtml_legend=1 00:03:30.639 --rc geninfo_all_blocks=1 00:03:30.639 --rc geninfo_unexecuted_blocks=1 00:03:30.639 00:03:30.639 ' 00:03:30.639 11:48:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.639 11:48:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.639 11:48:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.639 ************************************ 00:03:30.639 START TEST env_memory 00:03:30.639 ************************************ 00:03:30.639 11:48:07 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:30.639 00:03:30.639 00:03:30.639 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.639 http://cunit.sourceforge.net/ 00:03:30.639 00:03:30.639 00:03:30.639 Suite: memory 00:03:30.639 Test: alloc and free memory map ...[2024-10-21 11:48:07.166784] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:30.639 passed 00:03:30.639 Test: mem map translation ...[2024-10-21 11:48:07.192325] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:30.639 [2024-10-21 11:48:07.192353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:30.639 [2024-10-21 11:48:07.192400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:30.639 [2024-10-21 11:48:07.192407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:30.900 passed 00:03:30.900 Test: mem map registration ...[2024-10-21 11:48:07.247690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:30.900 [2024-10-21 11:48:07.247711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:30.900 passed 00:03:30.900 Test: mem map adjacent registrations ...passed 00:03:30.900 00:03:30.900 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.900 suites 1 1 n/a 0 0 00:03:30.900 tests 4 4 4 0 0 00:03:30.900 asserts 152 152 152 0 n/a 00:03:30.900 00:03:30.900 Elapsed time = 0.191 seconds 00:03:30.900 00:03:30.900 real 0m0.207s 00:03:30.900 user 0m0.194s 00:03:30.900 sys 0m0.011s 00:03:30.900 11:48:07 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.900 11:48:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:30.900 ************************************ 00:03:30.900 END TEST env_memory 00:03:30.900 ************************************ 00:03:30.900 11:48:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:30.900 11:48:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.900 11:48:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.900 11:48:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.900 ************************************ 00:03:30.900 START TEST env_vtophys 00:03:30.900 ************************************ 00:03:30.900 11:48:07 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:30.900 EAL: lib.eal log level changed from notice to debug 00:03:30.900 EAL: Detected lcore 0 as core 0 on socket 0 00:03:30.900 EAL: Detected lcore 1 as core 1 on socket 0 00:03:30.900 EAL: Detected lcore 2 as core 2 on socket 0 00:03:30.900 EAL: Detected lcore 3 as core 3 on socket 0 00:03:30.900 EAL: Detected lcore 4 as core 4 on socket 0 00:03:30.900 EAL: Detected lcore 5 as core 5 on socket 0 00:03:30.900 EAL: Detected lcore 6 as core 6 on socket 0 00:03:30.900 EAL: Detected lcore 7 as core 7 on socket 0 00:03:30.900 EAL: Detected lcore 8 as core 8 on socket 0 00:03:30.900 EAL: Detected lcore 9 as core 9 on socket 0 00:03:30.900 EAL: Detected lcore 10 as 
core 10 on socket 0 00:03:30.900 EAL: Detected lcore 11 as core 11 on socket 0 00:03:30.900 EAL: Detected lcore 12 as core 12 on socket 0 00:03:30.900 EAL: Detected lcore 13 as core 13 on socket 0 00:03:30.900 EAL: Detected lcore 14 as core 14 on socket 0 00:03:30.900 EAL: Detected lcore 15 as core 15 on socket 0 00:03:30.900 EAL: Detected lcore 16 as core 16 on socket 0 00:03:30.900 EAL: Detected lcore 17 as core 17 on socket 0 00:03:30.900 EAL: Detected lcore 18 as core 18 on socket 0 00:03:30.900 EAL: Detected lcore 19 as core 19 on socket 0 00:03:30.900 EAL: Detected lcore 20 as core 20 on socket 0 00:03:30.900 EAL: Detected lcore 21 as core 21 on socket 0 00:03:30.900 EAL: Detected lcore 22 as core 22 on socket 0 00:03:30.900 EAL: Detected lcore 23 as core 23 on socket 0 00:03:30.900 EAL: Detected lcore 24 as core 24 on socket 0 00:03:30.900 EAL: Detected lcore 25 as core 25 on socket 0 00:03:30.900 EAL: Detected lcore 26 as core 26 on socket 0 00:03:30.900 EAL: Detected lcore 27 as core 27 on socket 0 00:03:30.900 EAL: Detected lcore 28 as core 28 on socket 0 00:03:30.900 EAL: Detected lcore 29 as core 29 on socket 0 00:03:30.900 EAL: Detected lcore 30 as core 30 on socket 0 00:03:30.900 EAL: Detected lcore 31 as core 31 on socket 0 00:03:30.900 EAL: Detected lcore 32 as core 32 on socket 0 00:03:30.900 EAL: Detected lcore 33 as core 33 on socket 0 00:03:30.900 EAL: Detected lcore 34 as core 34 on socket 0 00:03:30.900 EAL: Detected lcore 35 as core 35 on socket 0 00:03:30.900 EAL: Detected lcore 36 as core 0 on socket 1 00:03:30.900 EAL: Detected lcore 37 as core 1 on socket 1 00:03:30.900 EAL: Detected lcore 38 as core 2 on socket 1 00:03:30.900 EAL: Detected lcore 39 as core 3 on socket 1 00:03:30.900 EAL: Detected lcore 40 as core 4 on socket 1 00:03:30.900 EAL: Detected lcore 41 as core 5 on socket 1 00:03:30.900 EAL: Detected lcore 42 as core 6 on socket 1 00:03:30.900 EAL: Detected lcore 43 as core 7 on socket 1 00:03:30.900 EAL: Detected lcore 44 as core 8 on socket 1 00:03:30.900 EAL: Detected lcore 45 as core 9 on socket 1 00:03:30.900 EAL: Detected lcore 46 as core 10 on socket 1 00:03:30.900 EAL: Detected lcore 47 as core 11 on socket 1 00:03:30.900 EAL: Detected lcore 48 as core 12 on socket 1 00:03:30.900 EAL: Detected lcore 49 as core 13 on socket 1 00:03:30.900 EAL: Detected lcore 50 as core 14 on socket 1 00:03:30.900 EAL: Detected lcore 51 as core 15 on socket 1 00:03:30.900 EAL: Detected lcore 52 as core 16 on socket 1 00:03:30.900 EAL: Detected lcore 53 as core 17 on socket 1 00:03:30.900 EAL: Detected lcore 54 as core 18 on socket 1 00:03:30.900 EAL: Detected lcore 55 as core 19 on socket 1 00:03:30.900 EAL: Detected lcore 56 as core 20 on socket 1 00:03:30.900 EAL: Detected lcore 57 as core 21 on socket 1 00:03:30.900 EAL: Detected lcore 58 as core 22 on socket 1 00:03:30.900 EAL: Detected lcore 59 as core 23 on socket 1 00:03:30.900 EAL: Detected lcore 60 as core 24 on socket 1 00:03:30.900 EAL: Detected lcore 61 as core 25 on socket 1 00:03:30.900 EAL: Detected lcore 62 as core 26 on socket 1 00:03:30.900 EAL: Detected lcore 63 as core 27 on socket 1 00:03:30.900 EAL: Detected lcore 64 as core 28 on socket 1 00:03:30.900 EAL: Detected lcore 65 as core 29 on socket 1 00:03:30.900 EAL: Detected lcore 66 as core 30 on socket 1 00:03:30.900 EAL: Detected lcore 67 as core 31 on socket 1 00:03:30.900 EAL: Detected lcore 68 as core 32 on socket 1 00:03:30.900 EAL: Detected lcore 69 as core 33 on socket 1 00:03:30.900 EAL: Detected lcore 70 as core 34 on socket 1 
00:03:30.900 EAL: Detected lcore 71 as core 35 on socket 1 00:03:30.900 EAL: Detected lcore 72 as core 0 on socket 0 00:03:30.900 EAL: Detected lcore 73 as core 1 on socket 0 00:03:30.900 EAL: Detected lcore 74 as core 2 on socket 0 00:03:30.900 EAL: Detected lcore 75 as core 3 on socket 0 00:03:30.900 EAL: Detected lcore 76 as core 4 on socket 0 00:03:30.900 EAL: Detected lcore 77 as core 5 on socket 0 00:03:30.900 EAL: Detected lcore 78 as core 6 on socket 0 00:03:30.900 EAL: Detected lcore 79 as core 7 on socket 0 00:03:30.900 EAL: Detected lcore 80 as core 8 on socket 0 00:03:30.900 EAL: Detected lcore 81 as core 9 on socket 0 00:03:30.900 EAL: Detected lcore 82 as core 10 on socket 0 00:03:30.900 EAL: Detected lcore 83 as core 11 on socket 0 00:03:30.900 EAL: Detected lcore 84 as core 12 on socket 0 00:03:30.900 EAL: Detected lcore 85 as core 13 on socket 0 00:03:30.900 EAL: Detected lcore 86 as core 14 on socket 0 00:03:30.900 EAL: Detected lcore 87 as core 15 on socket 0 00:03:30.900 EAL: Detected lcore 88 as core 16 on socket 0 00:03:30.900 EAL: Detected lcore 89 as core 17 on socket 0 00:03:30.900 EAL: Detected lcore 90 as core 18 on socket 0 00:03:30.900 EAL: Detected lcore 91 as core 19 on socket 0 00:03:30.900 EAL: Detected lcore 92 as core 20 on socket 0 00:03:30.900 EAL: Detected lcore 93 as core 21 on socket 0 00:03:30.900 EAL: Detected lcore 94 as core 22 on socket 0 00:03:30.900 EAL: Detected lcore 95 as core 23 on socket 0 00:03:30.900 EAL: Detected lcore 96 as core 24 on socket 0 00:03:30.900 EAL: Detected lcore 97 as core 25 on socket 0 00:03:30.900 EAL: Detected lcore 98 as core 26 on socket 0 00:03:30.900 EAL: Detected lcore 99 as core 27 on socket 0 00:03:30.900 EAL: Detected lcore 100 as core 28 on socket 0 00:03:30.900 EAL: Detected lcore 101 as core 29 on socket 0 00:03:30.900 EAL: Detected lcore 102 as core 30 on socket 0 00:03:30.900 EAL: Detected lcore 103 as core 31 on socket 0 00:03:30.900 EAL: Detected lcore 104 as core 32 on socket 0 00:03:30.900 EAL: Detected lcore 105 as core 33 on socket 0 00:03:30.900 EAL: Detected lcore 106 as core 34 on socket 0 00:03:30.900 EAL: Detected lcore 107 as core 35 on socket 0 00:03:30.900 EAL: Detected lcore 108 as core 0 on socket 1 00:03:30.900 EAL: Detected lcore 109 as core 1 on socket 1 00:03:30.900 EAL: Detected lcore 110 as core 2 on socket 1 00:03:30.900 EAL: Detected lcore 111 as core 3 on socket 1 00:03:30.900 EAL: Detected lcore 112 as core 4 on socket 1 00:03:30.901 EAL: Detected lcore 113 as core 5 on socket 1 00:03:30.901 EAL: Detected lcore 114 as core 6 on socket 1 00:03:30.901 EAL: Detected lcore 115 as core 7 on socket 1 00:03:30.901 EAL: Detected lcore 116 as core 8 on socket 1 00:03:30.901 EAL: Detected lcore 117 as core 9 on socket 1 00:03:30.901 EAL: Detected lcore 118 as core 10 on socket 1 00:03:30.901 EAL: Detected lcore 119 as core 11 on socket 1 00:03:30.901 EAL: Detected lcore 120 as core 12 on socket 1 00:03:30.901 EAL: Detected lcore 121 as core 13 on socket 1 00:03:30.901 EAL: Detected lcore 122 as core 14 on socket 1 00:03:30.901 EAL: Detected lcore 123 as core 15 on socket 1 00:03:30.901 EAL: Detected lcore 124 as core 16 on socket 1 00:03:30.901 EAL: Detected lcore 125 as core 17 on socket 1 00:03:30.901 EAL: Detected lcore 126 as core 18 on socket 1 00:03:30.901 EAL: Detected lcore 127 as core 19 on socket 1 00:03:30.901 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:30.901 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:30.901 EAL: Skipped lcore 130 as core 22 on socket 1 
00:03:30.901 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:30.901 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:30.901 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:30.901 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:30.901 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:30.901 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:30.901 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:30.901 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:30.901 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:30.901 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:30.901 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:30.901 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:30.901 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:30.901 EAL: Maximum logical cores by configuration: 128 00:03:30.901 EAL: Detected CPU lcores: 128 00:03:30.901 EAL: Detected NUMA nodes: 2 00:03:30.901 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:30.901 EAL: Detected shared linkage of DPDK 00:03:30.901 EAL: No shared files mode enabled, IPC will be disabled 00:03:30.901 EAL: Bus pci wants IOVA as 'DC' 00:03:30.901 EAL: Buses did not request a specific IOVA mode. 00:03:30.901 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:30.901 EAL: Selected IOVA mode 'VA' 00:03:30.901 EAL: Probing VFIO support... 00:03:30.901 EAL: IOMMU type 1 (Type 1) is supported 00:03:30.901 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:30.901 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:30.901 EAL: VFIO support initialized 00:03:30.901 EAL: Ask a virtual area of 0x2e000 bytes 00:03:30.901 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:30.901 EAL: Setting up physically contiguous memory... 00:03:30.901 EAL: Setting maximum number of open files to 524288 00:03:30.901 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:30.901 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:30.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:30.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:30.901 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.901 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:30.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.901 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.901 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:30.901 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:30.901 EAL: Hugepages will be freed exactly as allocated. 00:03:30.901 EAL: No shared files mode enabled, IPC is disabled 00:03:30.901 EAL: No shared files mode enabled, IPC is disabled 00:03:30.901 EAL: TSC frequency is ~2400000 KHz 00:03:30.901 EAL: Main lcore 0 is ready (tid=7f718ff48a00;cpuset=[0]) 00:03:30.901 EAL: Trying to obtain current memory policy. 00:03:30.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.901 EAL: Restoring previous memory policy: 0 00:03:30.901 EAL: request: mp_malloc_sync 00:03:30.901 EAL: No shared files mode enabled, IPC is disabled 00:03:30.901 EAL: Heap on socket 0 was expanded by 2MB 00:03:30.901 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:31.162 EAL: Mem event callback 'spdk:(nil)' registered 00:03:31.162 00:03:31.162 00:03:31.162 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.162 http://cunit.sourceforge.net/ 00:03:31.162 00:03:31.162 00:03:31.162 Suite: components_suite 00:03:31.162 Test: vtophys_malloc_test ...passed 00:03:31.162 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
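
The repeated expand/shrink pairs that follow (4MB up through 1026MB) are the vtophys malloc test allocating ever-larger DPDK buffers: each allocation that outgrows the heap fires the registered mem event callback ('spdk:(nil)' here) and the EAL maps additional 2MB hugepages onto the memseg virtual areas reserved above, then unmaps them again on free. One cheap way to watch the same dynamics from outside the process, assuming 2MB hugepages as in this run:

    # Snapshot hugepage usage per NUMA node while the test runs.
    watch_hp() {
        for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
            printf '%s free=%s total=%s\n' "${n%%/hugepages*}" \
                "$(<"$n/free_hugepages")" "$(<"$n/nr_hugepages")"
        done
    }
    watch_hp   # run before, during and after the allocation steps
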
00:03:31.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.162 EAL: Restoring previous memory policy: 4 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was expanded by 4MB 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was shrunk by 4MB 00:03:31.162 EAL: Trying to obtain current memory policy. 00:03:31.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.162 EAL: Restoring previous memory policy: 4 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was expanded by 6MB 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was shrunk by 6MB 00:03:31.162 EAL: Trying to obtain current memory policy. 00:03:31.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.162 EAL: Restoring previous memory policy: 4 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was expanded by 10MB 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was shrunk by 10MB 00:03:31.162 EAL: Trying to obtain current memory policy. 00:03:31.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.162 EAL: Restoring previous memory policy: 4 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.162 EAL: No shared files mode enabled, IPC is disabled 00:03:31.162 EAL: Heap on socket 0 was expanded by 18MB 00:03:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.162 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was shrunk by 18MB 00:03:31.163 EAL: Trying to obtain current memory policy. 00:03:31.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.163 EAL: Restoring previous memory policy: 4 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was expanded by 34MB 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was shrunk by 34MB 00:03:31.163 EAL: Trying to obtain current memory policy. 
00:03:31.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.163 EAL: Restoring previous memory policy: 4 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was expanded by 66MB 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was shrunk by 66MB 00:03:31.163 EAL: Trying to obtain current memory policy. 00:03:31.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.163 EAL: Restoring previous memory policy: 4 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was expanded by 130MB 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was shrunk by 130MB 00:03:31.163 EAL: Trying to obtain current memory policy. 00:03:31.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.163 EAL: Restoring previous memory policy: 4 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was expanded by 258MB 00:03:31.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.163 EAL: request: mp_malloc_sync 00:03:31.163 EAL: No shared files mode enabled, IPC is disabled 00:03:31.163 EAL: Heap on socket 0 was shrunk by 258MB 00:03:31.163 EAL: Trying to obtain current memory policy. 00:03:31.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.423 EAL: Restoring previous memory policy: 4 00:03:31.423 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.423 EAL: request: mp_malloc_sync 00:03:31.423 EAL: No shared files mode enabled, IPC is disabled 00:03:31.423 EAL: Heap on socket 0 was expanded by 514MB 00:03:31.423 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.423 EAL: request: mp_malloc_sync 00:03:31.423 EAL: No shared files mode enabled, IPC is disabled 00:03:31.423 EAL: Heap on socket 0 was shrunk by 514MB 00:03:31.423 EAL: Trying to obtain current memory policy. 
00:03:31.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.683 EAL: Restoring previous memory policy: 4 00:03:31.683 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.683 EAL: request: mp_malloc_sync 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 EAL: Heap on socket 0 was expanded by 1026MB 00:03:31.683 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.683 EAL: request: mp_malloc_sync 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:31.683 passed 00:03:31.683 00:03:31.683 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.683 suites 1 1 n/a 0 0 00:03:31.683 tests 2 2 2 0 0 00:03:31.683 asserts 497 497 497 0 n/a 00:03:31.683 00:03:31.683 Elapsed time = 0.687 seconds 00:03:31.683 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.683 EAL: request: mp_malloc_sync 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 EAL: Heap on socket 0 was shrunk by 2MB 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 EAL: No shared files mode enabled, IPC is disabled 00:03:31.683 00:03:31.683 real 0m0.822s 00:03:31.683 user 0m0.437s 00:03:31.683 sys 0m0.359s 00:03:31.683 11:48:08 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.683 11:48:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:31.683 ************************************ 00:03:31.683 END TEST env_vtophys 00:03:31.683 ************************************ 00:03:31.683 11:48:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:31.683 11:48:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.683 11:48:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.683 11:48:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.943 ************************************ 00:03:31.943 START TEST env_pci 00:03:31.943 ************************************ 00:03:31.943 11:48:08 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:31.943 00:03:31.943 00:03:31.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.943 http://cunit.sourceforge.net/ 00:03:31.943 00:03:31.943 00:03:31.943 Suite: pci 00:03:31.943 Test: pci_hook ...[2024-10-21 11:48:08.317471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 733602 has claimed it 00:03:31.943 EAL: Cannot find device (10000:00:01.0) 00:03:31.943 EAL: Failed to attach device on primary process 00:03:31.943 passed 00:03:31.943 00:03:31.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.943 suites 1 1 n/a 0 0 00:03:31.943 tests 1 1 1 0 0 00:03:31.943 asserts 25 25 25 0 n/a 00:03:31.943 00:03:31.943 Elapsed time = 0.032 seconds 00:03:31.943 00:03:31.943 real 0m0.054s 00:03:31.943 user 0m0.018s 00:03:31.943 sys 0m0.036s 00:03:31.943 11:48:08 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.943 11:48:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:31.943 ************************************ 00:03:31.943 END TEST env_pci 00:03:31.943 ************************************ 00:03:31.943 11:48:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:31.943 
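
The pci_hook test above passes by failing on purpose: it claims a device under the fake PCI domain 10000 from one process, then asserts that a second claim is rejected because the per-device lock file under /var/tmp is already held ("probably process 733602 has claimed it"). The lock is plain-file based, so the behaviour is easy to mimic from the shell; in this sketch flock stands in for SPDK's file-lock mechanism, and the path follows the trace:

    lock=/var/tmp/spdk_pci_lock_10000:00:01.0

    exec 9>"$lock"                          # first claimant opens and locks
    flock -n 9 && echo "claimed 10000:00:01.0"

    # A second open file description cannot take the lock while fd 9 holds
    # it, which is exactly the failure the test asserts on.
    ( exec 8>"$lock" && flock -n 8 ) || echo "already claimed by another process"
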
11:48:08 env -- env/env.sh@15 -- # uname 00:03:31.943 11:48:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:31.943 11:48:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:31.943 11:48:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:31.943 11:48:08 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:31.943 11:48:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.943 11:48:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.943 ************************************ 00:03:31.943 START TEST env_dpdk_post_init 00:03:31.943 ************************************ 00:03:31.943 11:48:08 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:31.943 EAL: Detected CPU lcores: 128 00:03:31.943 EAL: Detected NUMA nodes: 2 00:03:31.943 EAL: Detected shared linkage of DPDK 00:03:31.943 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:31.943 EAL: Selected IOVA mode 'VA' 00:03:31.943 EAL: VFIO support initialized 00:03:31.943 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.204 EAL: Using IOMMU type 1 (Type 1) 00:03:32.204 EAL: Ignore mapping IO port bar(1) 00:03:32.204 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:32.463 EAL: Ignore mapping IO port bar(1) 00:03:32.463 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:32.723 EAL: Ignore mapping IO port bar(1) 00:03:32.723 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:32.983 EAL: Ignore mapping IO port bar(1) 00:03:32.983 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:33.243 EAL: Ignore mapping IO port bar(1) 00:03:33.243 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:33.243 EAL: Ignore mapping IO port bar(1) 00:03:33.503 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:33.503 EAL: Ignore mapping IO port bar(1) 00:03:33.764 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:33.764 EAL: Ignore mapping IO port bar(1) 00:03:34.025 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:34.025 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:34.285 EAL: Ignore mapping IO port bar(1) 00:03:34.285 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:34.546 EAL: Ignore mapping IO port bar(1) 00:03:34.546 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:34.806 EAL: Ignore mapping IO port bar(1) 00:03:34.806 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:34.806 EAL: Ignore mapping IO port bar(1) 00:03:35.066 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:35.066 EAL: Ignore mapping IO port bar(1) 00:03:35.326 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:35.326 EAL: Ignore mapping IO port bar(1) 00:03:35.586 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:35.586 EAL: Ignore mapping IO port bar(1) 00:03:35.586 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:03:35.847 EAL: Ignore mapping IO port bar(1) 00:03:35.847 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:35.847 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:35.847 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:36.108 Starting DPDK initialization... 00:03:36.108 Starting SPDK post initialization... 00:03:36.108 SPDK NVMe probe 00:03:36.108 Attaching to 0000:65:00.0 00:03:36.108 Attached to 0000:65:00.0 00:03:36.108 Cleaning up... 00:03:38.019 00:03:38.019 real 0m5.733s 00:03:38.019 user 0m0.183s 00:03:38.019 sys 0m0.109s 00:03:38.019 11:48:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.019 11:48:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:38.019 ************************************ 00:03:38.019 END TEST env_dpdk_post_init 00:03:38.019 ************************************ 00:03:38.019 11:48:14 env -- env/env.sh@26 -- # uname 00:03:38.019 11:48:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:38.019 11:48:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:38.019 11:48:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.019 11:48:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.019 11:48:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.019 ************************************ 00:03:38.019 START TEST env_mem_callbacks 00:03:38.019 ************************************ 00:03:38.019 11:48:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:38.019 EAL: Detected CPU lcores: 128 00:03:38.019 EAL: Detected NUMA nodes: 2 00:03:38.019 EAL: Detected shared linkage of DPDK 00:03:38.019 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:38.019 EAL: Selected IOVA mode 'VA' 00:03:38.019 EAL: VFIO support initialized 00:03:38.019 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:38.019 00:03:38.019 00:03:38.019 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.019 http://cunit.sourceforge.net/ 00:03:38.019 00:03:38.019 00:03:38.019 Suite: memory 00:03:38.019 Test: test ... 
00:03:38.019 register 0x200000200000 2097152 00:03:38.019 malloc 3145728 00:03:38.019 register 0x200000400000 4194304 00:03:38.019 buf 0x200000500000 len 3145728 PASSED 00:03:38.019 malloc 64 00:03:38.019 buf 0x2000004fff40 len 64 PASSED 00:03:38.019 malloc 4194304 00:03:38.019 register 0x200000800000 6291456 00:03:38.019 buf 0x200000a00000 len 4194304 PASSED 00:03:38.019 free 0x200000500000 3145728 00:03:38.019 free 0x2000004fff40 64 00:03:38.019 unregister 0x200000400000 4194304 PASSED 00:03:38.019 free 0x200000a00000 4194304 00:03:38.019 unregister 0x200000800000 6291456 PASSED 00:03:38.019 malloc 8388608 00:03:38.019 register 0x200000400000 10485760 00:03:38.019 buf 0x200000600000 len 8388608 PASSED 00:03:38.019 free 0x200000600000 8388608 00:03:38.019 unregister 0x200000400000 10485760 PASSED 00:03:38.019 passed 00:03:38.019 00:03:38.019 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.019 suites 1 1 n/a 0 0 00:03:38.019 tests 1 1 1 0 0 00:03:38.019 asserts 15 15 15 0 n/a 00:03:38.019 00:03:38.019 Elapsed time = 0.010 seconds 00:03:38.019 00:03:38.019 real 0m0.069s 00:03:38.019 user 0m0.021s 00:03:38.019 sys 0m0.049s 00:03:38.019 11:48:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.019 11:48:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:38.019 ************************************ 00:03:38.019 END TEST env_mem_callbacks 00:03:38.019 ************************************ 00:03:38.019 00:03:38.019 real 0m7.502s 00:03:38.019 user 0m1.113s 00:03:38.019 sys 0m0.957s 00:03:38.019 11:48:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.019 11:48:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.019 ************************************ 00:03:38.019 END TEST env 00:03:38.019 ************************************ 00:03:38.019 11:48:14 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:38.019 11:48:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.019 11:48:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.019 11:48:14 -- common/autotest_common.sh@10 -- # set +x 00:03:38.019 ************************************ 00:03:38.019 START TEST rpc 00:03:38.019 ************************************ 00:03:38.019 11:48:14 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:38.019 * Looking for test storage... 
00:03:38.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:38.019 11:48:14 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:38.019 11:48:14 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:38.019 11:48:14 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:38.279 11:48:14 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:38.279 11:48:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.279 11:48:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.280 11:48:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.280 11:48:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.280 11:48:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.280 11:48:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.280 11:48:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.280 11:48:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:38.280 11:48:14 rpc -- scripts/common.sh@345 -- # : 1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.280 11:48:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:38.280 11:48:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@353 -- # local d=1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.280 11:48:14 rpc -- scripts/common.sh@355 -- # echo 1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.280 11:48:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@353 -- # local d=2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.280 11:48:14 rpc -- scripts/common.sh@355 -- # echo 2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.280 11:48:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.280 11:48:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.280 11:48:14 rpc -- scripts/common.sh@368 -- # return 0 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.280 --rc genhtml_branch_coverage=1 00:03:38.280 --rc genhtml_function_coverage=1 00:03:38.280 --rc genhtml_legend=1 00:03:38.280 --rc geninfo_all_blocks=1 00:03:38.280 --rc geninfo_unexecuted_blocks=1 00:03:38.280 00:03:38.280 ' 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.280 --rc genhtml_branch_coverage=1 00:03:38.280 --rc genhtml_function_coverage=1 00:03:38.280 --rc genhtml_legend=1 00:03:38.280 --rc geninfo_all_blocks=1 00:03:38.280 --rc geninfo_unexecuted_blocks=1 00:03:38.280 00:03:38.280 ' 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.280 --rc genhtml_branch_coverage=1 00:03:38.280 --rc genhtml_function_coverage=1 
00:03:38.280 --rc genhtml_legend=1 00:03:38.280 --rc geninfo_all_blocks=1 00:03:38.280 --rc geninfo_unexecuted_blocks=1 00:03:38.280 00:03:38.280 ' 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.280 --rc genhtml_branch_coverage=1 00:03:38.280 --rc genhtml_function_coverage=1 00:03:38.280 --rc genhtml_legend=1 00:03:38.280 --rc geninfo_all_blocks=1 00:03:38.280 --rc geninfo_unexecuted_blocks=1 00:03:38.280 00:03:38.280 ' 00:03:38.280 11:48:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=735474 00:03:38.280 11:48:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.280 11:48:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 735474 00:03:38.280 11:48:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 735474 ']' 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:38.280 11:48:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.280 [2024-10-21 11:48:14.741547] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:03:38.280 [2024-10-21 11:48:14.741614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735474 ] 00:03:38.280 [2024-10-21 11:48:14.825254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.540 [2024-10-21 11:48:14.876818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:38.540 [2024-10-21 11:48:14.876877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 735474' to capture a snapshot of events at runtime. 00:03:38.540 [2024-10-21 11:48:14.876885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:38.540 [2024-10-21 11:48:14.876893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:38.540 [2024-10-21 11:48:14.876899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid735474 for offline analysis/debug. 
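waitforlisten above blocks until spdk_tgt's JSON-RPC server accepts connections on /var/tmp/spdk.sock; every rpc_cmd that follows is a JSON-RPC 2.0 request over that UNIX-domain socket. A minimal client sketch of the same exchange (the spdk_get_version method is the one exercised by the skip_rpc test later in this log; error handling and response parsing are omitted for brevity):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Minimal sketch of what rpc_cmd does under the hood: connect to the
 * target's UNIX-domain socket and send one JSON-RPC 2.0 request. */
int
main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"spdk_get_version\"}";
	char buf[4096];
	ssize_t n;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;	/* target not up yet: what waitforlisten polls for */
	write(fd, req, strlen(req));
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf);
	}
	close(fd);
	return 0;
}
```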
00:03:38.540 [2024-10-21 11:48:14.877690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.112 11:48:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:39.112 11:48:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:39.112 11:48:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.112 11:48:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.112 11:48:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:39.112 11:48:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:39.112 11:48:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.112 11:48:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.112 11:48:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.112 ************************************ 00:03:39.112 START TEST rpc_integrity 00:03:39.112 ************************************ 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.112 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:39.112 { 00:03:39.112 "name": "Malloc0", 00:03:39.112 "aliases": [ 00:03:39.112 "ca0c9e85-0011-4be6-b638-34a1143e2f03" 00:03:39.112 ], 00:03:39.112 "product_name": "Malloc disk", 00:03:39.112 "block_size": 512, 00:03:39.112 "num_blocks": 16384, 00:03:39.112 "uuid": "ca0c9e85-0011-4be6-b638-34a1143e2f03", 00:03:39.112 "assigned_rate_limits": { 00:03:39.112 "rw_ios_per_sec": 0, 00:03:39.112 "rw_mbytes_per_sec": 0, 00:03:39.112 "r_mbytes_per_sec": 0, 00:03:39.112 "w_mbytes_per_sec": 0 00:03:39.112 }, 
00:03:39.112 "claimed": false, 00:03:39.112 "zoned": false, 00:03:39.112 "supported_io_types": { 00:03:39.112 "read": true, 00:03:39.112 "write": true, 00:03:39.112 "unmap": true, 00:03:39.112 "flush": true, 00:03:39.112 "reset": true, 00:03:39.112 "nvme_admin": false, 00:03:39.112 "nvme_io": false, 00:03:39.112 "nvme_io_md": false, 00:03:39.112 "write_zeroes": true, 00:03:39.112 "zcopy": true, 00:03:39.112 "get_zone_info": false, 00:03:39.112 "zone_management": false, 00:03:39.112 "zone_append": false, 00:03:39.112 "compare": false, 00:03:39.112 "compare_and_write": false, 00:03:39.112 "abort": true, 00:03:39.112 "seek_hole": false, 00:03:39.112 "seek_data": false, 00:03:39.112 "copy": true, 00:03:39.112 "nvme_iov_md": false 00:03:39.112 }, 00:03:39.112 "memory_domains": [ 00:03:39.112 { 00:03:39.112 "dma_device_id": "system", 00:03:39.112 "dma_device_type": 1 00:03:39.112 }, 00:03:39.112 { 00:03:39.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.112 "dma_device_type": 2 00:03:39.112 } 00:03:39.112 ], 00:03:39.112 "driver_specific": {} 00:03:39.112 } 00:03:39.112 ]' 00:03:39.112 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:39.373 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:39.373 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.373 [2024-10-21 11:48:15.746964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:39.373 [2024-10-21 11:48:15.747012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:39.373 [2024-10-21 11:48:15.747028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19cd640 00:03:39.373 [2024-10-21 11:48:15.747036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:39.373 [2024-10-21 11:48:15.748630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:39.373 [2024-10-21 11:48:15.748666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:39.373 Passthru0 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.373 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.373 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.373 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:39.373 { 00:03:39.373 "name": "Malloc0", 00:03:39.373 "aliases": [ 00:03:39.373 "ca0c9e85-0011-4be6-b638-34a1143e2f03" 00:03:39.373 ], 00:03:39.373 "product_name": "Malloc disk", 00:03:39.373 "block_size": 512, 00:03:39.373 "num_blocks": 16384, 00:03:39.373 "uuid": "ca0c9e85-0011-4be6-b638-34a1143e2f03", 00:03:39.373 "assigned_rate_limits": { 00:03:39.373 "rw_ios_per_sec": 0, 00:03:39.373 "rw_mbytes_per_sec": 0, 00:03:39.373 "r_mbytes_per_sec": 0, 00:03:39.373 "w_mbytes_per_sec": 0 00:03:39.373 }, 00:03:39.373 "claimed": true, 00:03:39.373 "claim_type": "exclusive_write", 00:03:39.373 "zoned": false, 00:03:39.373 "supported_io_types": { 00:03:39.373 "read": true, 00:03:39.373 "write": true, 00:03:39.373 "unmap": true, 00:03:39.373 "flush": 
true, 00:03:39.373 "reset": true, 00:03:39.373 "nvme_admin": false, 00:03:39.373 "nvme_io": false, 00:03:39.373 "nvme_io_md": false, 00:03:39.373 "write_zeroes": true, 00:03:39.373 "zcopy": true, 00:03:39.373 "get_zone_info": false, 00:03:39.373 "zone_management": false, 00:03:39.373 "zone_append": false, 00:03:39.374 "compare": false, 00:03:39.374 "compare_and_write": false, 00:03:39.374 "abort": true, 00:03:39.374 "seek_hole": false, 00:03:39.374 "seek_data": false, 00:03:39.374 "copy": true, 00:03:39.374 "nvme_iov_md": false 00:03:39.374 }, 00:03:39.374 "memory_domains": [ 00:03:39.374 { 00:03:39.374 "dma_device_id": "system", 00:03:39.374 "dma_device_type": 1 00:03:39.374 }, 00:03:39.374 { 00:03:39.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.374 "dma_device_type": 2 00:03:39.374 } 00:03:39.374 ], 00:03:39.374 "driver_specific": {} 00:03:39.374 }, 00:03:39.374 { 00:03:39.374 "name": "Passthru0", 00:03:39.374 "aliases": [ 00:03:39.374 "9aca6301-d5ed-5951-89a2-b8a3009315a1" 00:03:39.374 ], 00:03:39.374 "product_name": "passthru", 00:03:39.374 "block_size": 512, 00:03:39.374 "num_blocks": 16384, 00:03:39.374 "uuid": "9aca6301-d5ed-5951-89a2-b8a3009315a1", 00:03:39.374 "assigned_rate_limits": { 00:03:39.374 "rw_ios_per_sec": 0, 00:03:39.374 "rw_mbytes_per_sec": 0, 00:03:39.374 "r_mbytes_per_sec": 0, 00:03:39.374 "w_mbytes_per_sec": 0 00:03:39.374 }, 00:03:39.374 "claimed": false, 00:03:39.374 "zoned": false, 00:03:39.374 "supported_io_types": { 00:03:39.374 "read": true, 00:03:39.374 "write": true, 00:03:39.374 "unmap": true, 00:03:39.374 "flush": true, 00:03:39.374 "reset": true, 00:03:39.374 "nvme_admin": false, 00:03:39.374 "nvme_io": false, 00:03:39.374 "nvme_io_md": false, 00:03:39.374 "write_zeroes": true, 00:03:39.374 "zcopy": true, 00:03:39.374 "get_zone_info": false, 00:03:39.374 "zone_management": false, 00:03:39.374 "zone_append": false, 00:03:39.374 "compare": false, 00:03:39.374 "compare_and_write": false, 00:03:39.374 "abort": true, 00:03:39.374 "seek_hole": false, 00:03:39.374 "seek_data": false, 00:03:39.374 "copy": true, 00:03:39.374 "nvme_iov_md": false 00:03:39.374 }, 00:03:39.374 "memory_domains": [ 00:03:39.374 { 00:03:39.374 "dma_device_id": "system", 00:03:39.374 "dma_device_type": 1 00:03:39.374 }, 00:03:39.374 { 00:03:39.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.374 "dma_device_type": 2 00:03:39.374 } 00:03:39.374 ], 00:03:39.374 "driver_specific": { 00:03:39.374 "passthru": { 00:03:39.374 "name": "Passthru0", 00:03:39.374 "base_bdev_name": "Malloc0" 00:03:39.374 } 00:03:39.374 } 00:03:39.374 } 00:03:39.374 ]' 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:39.374 11:48:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:39.374 00:03:39.374 real 0m0.312s 00:03:39.374 user 0m0.193s 00:03:39.374 sys 0m0.046s 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.374 11:48:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 ************************************ 00:03:39.374 END TEST rpc_integrity 00:03:39.374 ************************************ 00:03:39.374 11:48:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:39.374 11:48:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.374 11:48:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.374 11:48:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 ************************************ 00:03:39.636 START TEST rpc_plugins 00:03:39.636 ************************************ 00:03:39.636 11:48:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:39.636 11:48:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:39.636 11:48:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.636 11:48:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:39.636 { 00:03:39.636 "name": "Malloc1", 00:03:39.636 "aliases": [ 00:03:39.636 "479ed311-36c0-48e0-8b53-4fcc13602cf0" 00:03:39.636 ], 00:03:39.636 "product_name": "Malloc disk", 00:03:39.636 "block_size": 4096, 00:03:39.636 "num_blocks": 256, 00:03:39.636 "uuid": "479ed311-36c0-48e0-8b53-4fcc13602cf0", 00:03:39.636 "assigned_rate_limits": { 00:03:39.636 "rw_ios_per_sec": 0, 00:03:39.636 "rw_mbytes_per_sec": 0, 00:03:39.636 "r_mbytes_per_sec": 0, 00:03:39.636 "w_mbytes_per_sec": 0 00:03:39.636 }, 00:03:39.636 "claimed": false, 00:03:39.636 "zoned": false, 00:03:39.636 "supported_io_types": { 00:03:39.636 "read": true, 00:03:39.636 "write": true, 00:03:39.636 "unmap": true, 00:03:39.636 "flush": true, 00:03:39.636 "reset": true, 00:03:39.636 "nvme_admin": false, 00:03:39.636 "nvme_io": false, 00:03:39.636 "nvme_io_md": false, 00:03:39.636 "write_zeroes": true, 00:03:39.636 "zcopy": true, 00:03:39.636 "get_zone_info": false, 00:03:39.636 "zone_management": false, 00:03:39.636 "zone_append": false, 00:03:39.636 "compare": false, 00:03:39.636 "compare_and_write": false, 00:03:39.636 "abort": true, 00:03:39.636 "seek_hole": false, 00:03:39.636 "seek_data": false, 00:03:39.636 "copy": true, 00:03:39.636 "nvme_iov_md": false 
00:03:39.636 }, 00:03:39.636 "memory_domains": [ 00:03:39.636 { 00:03:39.636 "dma_device_id": "system", 00:03:39.636 "dma_device_type": 1 00:03:39.636 }, 00:03:39.636 { 00:03:39.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.636 "dma_device_type": 2 00:03:39.636 } 00:03:39.636 ], 00:03:39.636 "driver_specific": {} 00:03:39.636 } 00:03:39.636 ]' 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:39.636 11:48:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:39.636 00:03:39.636 real 0m0.153s 00:03:39.636 user 0m0.097s 00:03:39.636 sys 0m0.021s 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.636 11:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 ************************************ 00:03:39.636 END TEST rpc_plugins 00:03:39.636 ************************************ 00:03:39.636 11:48:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:39.636 11:48:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.636 11:48:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.636 11:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.636 ************************************ 00:03:39.636 START TEST rpc_trace_cmd_test 00:03:39.636 ************************************ 00:03:39.636 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:39.636 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:39.636 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:39.636 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.636 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:39.897 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid735474", 00:03:39.897 "tpoint_group_mask": "0x8", 00:03:39.897 "iscsi_conn": { 00:03:39.897 "mask": "0x2", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "scsi": { 00:03:39.897 "mask": "0x4", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "bdev": { 00:03:39.897 "mask": "0x8", 00:03:39.897 "tpoint_mask": "0xffffffffffffffff" 00:03:39.897 }, 00:03:39.897 "nvmf_rdma": { 00:03:39.897 "mask": "0x10", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "nvmf_tcp": { 00:03:39.897 "mask": "0x20", 00:03:39.897 
"tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "ftl": { 00:03:39.897 "mask": "0x40", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "blobfs": { 00:03:39.897 "mask": "0x80", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "dsa": { 00:03:39.897 "mask": "0x200", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "thread": { 00:03:39.897 "mask": "0x400", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "nvme_pcie": { 00:03:39.897 "mask": "0x800", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "iaa": { 00:03:39.897 "mask": "0x1000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "nvme_tcp": { 00:03:39.897 "mask": "0x2000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "bdev_nvme": { 00:03:39.897 "mask": "0x4000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "sock": { 00:03:39.897 "mask": "0x8000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "blob": { 00:03:39.897 "mask": "0x10000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "bdev_raid": { 00:03:39.897 "mask": "0x20000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 }, 00:03:39.897 "scheduler": { 00:03:39.897 "mask": "0x40000", 00:03:39.897 "tpoint_mask": "0x0" 00:03:39.897 } 00:03:39.897 }' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:39.897 00:03:39.897 real 0m0.218s 00:03:39.897 user 0m0.178s 00:03:39.897 sys 0m0.029s 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.897 11:48:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:39.897 ************************************ 00:03:39.897 END TEST rpc_trace_cmd_test 00:03:39.897 ************************************ 00:03:39.897 11:48:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:39.897 11:48:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:39.897 11:48:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:39.897 11:48:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.897 11:48:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.897 11:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 ************************************ 00:03:40.158 START TEST rpc_daemon_integrity 00:03:40.158 ************************************ 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.158 11:48:16 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.158 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.158 { 00:03:40.158 "name": "Malloc2", 00:03:40.158 "aliases": [ 00:03:40.158 "a905fcdd-0b21-49bf-ade4-a357c990a0a7" 00:03:40.158 ], 00:03:40.158 "product_name": "Malloc disk", 00:03:40.158 "block_size": 512, 00:03:40.158 "num_blocks": 16384, 00:03:40.158 "uuid": "a905fcdd-0b21-49bf-ade4-a357c990a0a7", 00:03:40.158 "assigned_rate_limits": { 00:03:40.158 "rw_ios_per_sec": 0, 00:03:40.158 "rw_mbytes_per_sec": 0, 00:03:40.158 "r_mbytes_per_sec": 0, 00:03:40.158 "w_mbytes_per_sec": 0 00:03:40.158 }, 00:03:40.158 "claimed": false, 00:03:40.158 "zoned": false, 00:03:40.158 "supported_io_types": { 00:03:40.158 "read": true, 00:03:40.158 "write": true, 00:03:40.158 "unmap": true, 00:03:40.158 "flush": true, 00:03:40.158 "reset": true, 00:03:40.158 "nvme_admin": false, 00:03:40.158 "nvme_io": false, 00:03:40.158 "nvme_io_md": false, 00:03:40.158 "write_zeroes": true, 00:03:40.158 "zcopy": true, 00:03:40.158 "get_zone_info": false, 00:03:40.158 "zone_management": false, 00:03:40.158 "zone_append": false, 00:03:40.158 "compare": false, 00:03:40.158 "compare_and_write": false, 00:03:40.158 "abort": true, 00:03:40.158 "seek_hole": false, 00:03:40.158 "seek_data": false, 00:03:40.158 "copy": true, 00:03:40.158 "nvme_iov_md": false 00:03:40.158 }, 00:03:40.158 "memory_domains": [ 00:03:40.158 { 00:03:40.158 "dma_device_id": "system", 00:03:40.158 "dma_device_type": 1 00:03:40.158 }, 00:03:40.158 { 00:03:40.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.158 "dma_device_type": 2 00:03:40.158 } 00:03:40.158 ], 00:03:40.158 "driver_specific": {} 00:03:40.158 } 00:03:40.158 ]' 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.159 [2024-10-21 11:48:16.665460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:40.159 
[2024-10-21 11:48:16.665503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.159 [2024-10-21 11:48:16.665519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a5f900 00:03:40.159 [2024-10-21 11:48:16.665527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.159 [2024-10-21 11:48:16.666979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.159 [2024-10-21 11:48:16.667014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.159 Passthru0 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.159 { 00:03:40.159 "name": "Malloc2", 00:03:40.159 "aliases": [ 00:03:40.159 "a905fcdd-0b21-49bf-ade4-a357c990a0a7" 00:03:40.159 ], 00:03:40.159 "product_name": "Malloc disk", 00:03:40.159 "block_size": 512, 00:03:40.159 "num_blocks": 16384, 00:03:40.159 "uuid": "a905fcdd-0b21-49bf-ade4-a357c990a0a7", 00:03:40.159 "assigned_rate_limits": { 00:03:40.159 "rw_ios_per_sec": 0, 00:03:40.159 "rw_mbytes_per_sec": 0, 00:03:40.159 "r_mbytes_per_sec": 0, 00:03:40.159 "w_mbytes_per_sec": 0 00:03:40.159 }, 00:03:40.159 "claimed": true, 00:03:40.159 "claim_type": "exclusive_write", 00:03:40.159 "zoned": false, 00:03:40.159 "supported_io_types": { 00:03:40.159 "read": true, 00:03:40.159 "write": true, 00:03:40.159 "unmap": true, 00:03:40.159 "flush": true, 00:03:40.159 "reset": true, 00:03:40.159 "nvme_admin": false, 00:03:40.159 "nvme_io": false, 00:03:40.159 "nvme_io_md": false, 00:03:40.159 "write_zeroes": true, 00:03:40.159 "zcopy": true, 00:03:40.159 "get_zone_info": false, 00:03:40.159 "zone_management": false, 00:03:40.159 "zone_append": false, 00:03:40.159 "compare": false, 00:03:40.159 "compare_and_write": false, 00:03:40.159 "abort": true, 00:03:40.159 "seek_hole": false, 00:03:40.159 "seek_data": false, 00:03:40.159 "copy": true, 00:03:40.159 "nvme_iov_md": false 00:03:40.159 }, 00:03:40.159 "memory_domains": [ 00:03:40.159 { 00:03:40.159 "dma_device_id": "system", 00:03:40.159 "dma_device_type": 1 00:03:40.159 }, 00:03:40.159 { 00:03:40.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.159 "dma_device_type": 2 00:03:40.159 } 00:03:40.159 ], 00:03:40.159 "driver_specific": {} 00:03:40.159 }, 00:03:40.159 { 00:03:40.159 "name": "Passthru0", 00:03:40.159 "aliases": [ 00:03:40.159 "b2ca911c-3e45-5640-85f5-e914099ddac0" 00:03:40.159 ], 00:03:40.159 "product_name": "passthru", 00:03:40.159 "block_size": 512, 00:03:40.159 "num_blocks": 16384, 00:03:40.159 "uuid": "b2ca911c-3e45-5640-85f5-e914099ddac0", 00:03:40.159 "assigned_rate_limits": { 00:03:40.159 "rw_ios_per_sec": 0, 00:03:40.159 "rw_mbytes_per_sec": 0, 00:03:40.159 "r_mbytes_per_sec": 0, 00:03:40.159 "w_mbytes_per_sec": 0 00:03:40.159 }, 00:03:40.159 "claimed": false, 00:03:40.159 "zoned": false, 00:03:40.159 "supported_io_types": { 00:03:40.159 "read": true, 00:03:40.159 "write": true, 00:03:40.159 "unmap": true, 00:03:40.159 "flush": true, 00:03:40.159 "reset": true, 
00:03:40.159 "nvme_admin": false, 00:03:40.159 "nvme_io": false, 00:03:40.159 "nvme_io_md": false, 00:03:40.159 "write_zeroes": true, 00:03:40.159 "zcopy": true, 00:03:40.159 "get_zone_info": false, 00:03:40.159 "zone_management": false, 00:03:40.159 "zone_append": false, 00:03:40.159 "compare": false, 00:03:40.159 "compare_and_write": false, 00:03:40.159 "abort": true, 00:03:40.159 "seek_hole": false, 00:03:40.159 "seek_data": false, 00:03:40.159 "copy": true, 00:03:40.159 "nvme_iov_md": false 00:03:40.159 }, 00:03:40.159 "memory_domains": [ 00:03:40.159 { 00:03:40.159 "dma_device_id": "system", 00:03:40.159 "dma_device_type": 1 00:03:40.159 }, 00:03:40.159 { 00:03:40.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.159 "dma_device_type": 2 00:03:40.159 } 00:03:40.159 ], 00:03:40.159 "driver_specific": { 00:03:40.159 "passthru": { 00:03:40.159 "name": "Passthru0", 00:03:40.159 "base_bdev_name": "Malloc2" 00:03:40.159 } 00:03:40.159 } 00:03:40.159 } 00:03:40.159 ]' 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.159 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:40.421 00:03:40.421 real 0m0.299s 00:03:40.421 user 0m0.194s 00:03:40.421 sys 0m0.037s 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.421 11:48:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.421 ************************************ 00:03:40.421 END TEST rpc_daemon_integrity 00:03:40.421 ************************************ 00:03:40.421 11:48:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:40.421 11:48:16 rpc -- rpc/rpc.sh@84 -- # killprocess 735474 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 735474 ']' 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@954 -- # kill -0 735474 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@955 -- # uname 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 735474 
00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 735474' 00:03:40.421 killing process with pid 735474 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@969 -- # kill 735474 00:03:40.421 11:48:16 rpc -- common/autotest_common.sh@974 -- # wait 735474 00:03:40.682 00:03:40.682 real 0m2.710s 00:03:40.682 user 0m3.437s 00:03:40.682 sys 0m0.846s 00:03:40.682 11:48:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.682 11:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.682 ************************************ 00:03:40.682 END TEST rpc 00:03:40.682 ************************************ 00:03:40.682 11:48:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:40.682 11:48:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.682 11:48:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.682 11:48:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.682 ************************************ 00:03:40.682 START TEST skip_rpc 00:03:40.682 ************************************ 00:03:40.682 11:48:17 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:40.943 * Looking for test storage... 00:03:40.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.943 11:48:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.943 --rc genhtml_branch_coverage=1 00:03:40.943 --rc genhtml_function_coverage=1 00:03:40.943 --rc genhtml_legend=1 00:03:40.943 --rc geninfo_all_blocks=1 00:03:40.943 --rc geninfo_unexecuted_blocks=1 00:03:40.943 00:03:40.943 ' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.943 --rc genhtml_branch_coverage=1 00:03:40.943 --rc genhtml_function_coverage=1 00:03:40.943 --rc genhtml_legend=1 00:03:40.943 --rc geninfo_all_blocks=1 00:03:40.943 --rc geninfo_unexecuted_blocks=1 00:03:40.943 00:03:40.943 ' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.943 --rc genhtml_branch_coverage=1 00:03:40.943 --rc genhtml_function_coverage=1 00:03:40.943 --rc genhtml_legend=1 00:03:40.943 --rc geninfo_all_blocks=1 00:03:40.943 --rc geninfo_unexecuted_blocks=1 00:03:40.943 00:03:40.943 ' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.943 --rc genhtml_branch_coverage=1 00:03:40.943 --rc genhtml_function_coverage=1 00:03:40.943 --rc genhtml_legend=1 00:03:40.943 --rc geninfo_all_blocks=1 00:03:40.943 --rc geninfo_unexecuted_blocks=1 00:03:40.943 00:03:40.943 ' 00:03:40.943 11:48:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:40.943 11:48:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.943 11:48:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.943 11:48:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.943 ************************************ 00:03:40.943 START TEST skip_rpc 00:03:40.943 ************************************ 00:03:40.943 11:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:40.943 
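The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov predates 2 by splitting both version strings on dots and comparing them component by component. A minimal C sketch of the same numeric compare, assuming plain dot-separated versions; the function name is mine:

```c
#include <stdio.h>
#include <stdlib.h>

/* Component-wise numeric version compare mirroring the cmp_versions
 * walk traced above: returns <0, 0 or >0 like strcmp. Sketch only;
 * assumes purely numeric, dot-separated components. */
static int
version_cmp(const char *a, const char *b)
{
	while (*a || *b) {
		char *ea, *eb;
		long x = strtol(a, &ea, 10);	/* missing component reads as 0 */
		long y = strtol(b, &eb, 10);

		if (x != y)
			return x < y ? -1 : 1;
		a = (*ea == '.') ? ea + 1 : ea;
		b = (*eb == '.') ? eb + 1 : eb;
	}
	return 0;
}

int
main(void)
{
	/* "lt 1.15 2" from the trace: first components already differ
	 * (1 < 2), so 1.15 sorts before 2 and lt returns success. */
	printf("%d\n", version_cmp("1.15", "2") < 0);
	return 0;
}
```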
11:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=736316 00:03:40.943 11:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.943 11:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:40.943 11:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:41.204 [2024-10-21 11:48:17.550097] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:03:41.204 [2024-10-21 11:48:17.550159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736316 ] 00:03:41.204 [2024-10-21 11:48:17.632400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.204 [2024-10-21 11:48:17.684749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 736316 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 736316 ']' 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 736316 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736316 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736316' 00:03:46.551 killing process with pid 736316 00:03:46.551 11:48:22 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 736316 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 736316 00:03:46.551 00:03:46.551 real 0m5.261s 00:03:46.551 user 0m5.017s 00:03:46.551 sys 0m0.294s 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.551 11:48:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.551 ************************************ 00:03:46.551 END TEST skip_rpc 00:03:46.551 ************************************ 00:03:46.551 11:48:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:46.551 11:48:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.551 11:48:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.551 11:48:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.551 ************************************ 00:03:46.551 START TEST skip_rpc_with_json 00:03:46.551 ************************************ 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=737363 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 737363 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 737363 ']' 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:46.551 11:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.551 [2024-10-21 11:48:22.886457] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:03:46.551 [2024-10-21 11:48:22.886512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737363 ] 00:03:46.551 [2024-10-21 11:48:22.963454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.551 [2024-10-21 11:48:22.998929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.146 [2024-10-21 11:48:23.672525] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:47.146 request: 00:03:47.146 { 00:03:47.146 "trtype": "tcp", 00:03:47.146 "method": "nvmf_get_transports", 00:03:47.146 "req_id": 1 00:03:47.146 } 00:03:47.146 Got JSON-RPC error response 00:03:47.146 response: 00:03:47.146 { 00:03:47.146 "code": -19, 00:03:47.146 "message": "No such device" 00:03:47.146 } 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.146 [2024-10-21 11:48:23.684625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.146 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.408 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.408 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.408 { 00:03:47.408 "subsystems": [ 00:03:47.408 { 00:03:47.408 "subsystem": "fsdev", 00:03:47.408 "config": [ 00:03:47.408 { 00:03:47.408 "method": "fsdev_set_opts", 00:03:47.408 "params": { 00:03:47.408 "fsdev_io_pool_size": 65535, 00:03:47.408 "fsdev_io_cache_size": 256 00:03:47.408 } 00:03:47.408 } 00:03:47.408 ] 00:03:47.408 }, 00:03:47.408 { 00:03:47.408 "subsystem": "vfio_user_target", 00:03:47.408 "config": null 00:03:47.408 }, 00:03:47.408 { 00:03:47.408 "subsystem": "keyring", 00:03:47.408 "config": [] 00:03:47.408 }, 00:03:47.408 { 00:03:47.408 "subsystem": "iobuf", 00:03:47.408 "config": [ 00:03:47.408 { 00:03:47.408 "method": "iobuf_set_options", 00:03:47.408 "params": { 00:03:47.408 "small_pool_count": 8192, 00:03:47.408 "large_pool_count": 1024, 00:03:47.408 "small_bufsize": 8192, 00:03:47.408 "large_bufsize": 135168 00:03:47.408 } 00:03:47.408 } 00:03:47.408 ] 00:03:47.408 }, 00:03:47.408 { 
00:03:47.408 "subsystem": "sock", 00:03:47.408 "config": [ 00:03:47.408 { 00:03:47.408 "method": "sock_set_default_impl", 00:03:47.408 "params": { 00:03:47.408 "impl_name": "posix" 00:03:47.408 } 00:03:47.408 }, 00:03:47.408 { 00:03:47.408 "method": "sock_impl_set_options", 00:03:47.408 "params": { 00:03:47.408 "impl_name": "ssl", 00:03:47.408 "recv_buf_size": 4096, 00:03:47.408 "send_buf_size": 4096, 00:03:47.408 "enable_recv_pipe": true, 00:03:47.408 "enable_quickack": false, 00:03:47.408 "enable_placement_id": 0, 00:03:47.408 "enable_zerocopy_send_server": true, 00:03:47.408 "enable_zerocopy_send_client": false, 00:03:47.408 "zerocopy_threshold": 0, 00:03:47.408 "tls_version": 0, 00:03:47.408 "enable_ktls": false 00:03:47.408 } 00:03:47.408 }, 00:03:47.408 { 00:03:47.408 "method": "sock_impl_set_options", 00:03:47.408 "params": { 00:03:47.409 "impl_name": "posix", 00:03:47.409 "recv_buf_size": 2097152, 00:03:47.409 "send_buf_size": 2097152, 00:03:47.409 "enable_recv_pipe": true, 00:03:47.409 "enable_quickack": false, 00:03:47.409 "enable_placement_id": 0, 00:03:47.409 "enable_zerocopy_send_server": true, 00:03:47.409 "enable_zerocopy_send_client": false, 00:03:47.409 "zerocopy_threshold": 0, 00:03:47.409 "tls_version": 0, 00:03:47.409 "enable_ktls": false 00:03:47.409 } 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "vmd", 00:03:47.409 "config": [] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "accel", 00:03:47.409 "config": [ 00:03:47.409 { 00:03:47.409 "method": "accel_set_options", 00:03:47.409 "params": { 00:03:47.409 "small_cache_size": 128, 00:03:47.409 "large_cache_size": 16, 00:03:47.409 "task_count": 2048, 00:03:47.409 "sequence_count": 2048, 00:03:47.409 "buf_count": 2048 00:03:47.409 } 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "bdev", 00:03:47.409 "config": [ 00:03:47.409 { 00:03:47.409 "method": "bdev_set_options", 00:03:47.409 "params": { 00:03:47.409 "bdev_io_pool_size": 65535, 00:03:47.409 "bdev_io_cache_size": 256, 00:03:47.409 "bdev_auto_examine": true, 00:03:47.409 "iobuf_small_cache_size": 128, 00:03:47.409 "iobuf_large_cache_size": 16 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "bdev_raid_set_options", 00:03:47.409 "params": { 00:03:47.409 "process_window_size_kb": 1024, 00:03:47.409 "process_max_bandwidth_mb_sec": 0 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "bdev_iscsi_set_options", 00:03:47.409 "params": { 00:03:47.409 "timeout_sec": 30 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "bdev_nvme_set_options", 00:03:47.409 "params": { 00:03:47.409 "action_on_timeout": "none", 00:03:47.409 "timeout_us": 0, 00:03:47.409 "timeout_admin_us": 0, 00:03:47.409 "keep_alive_timeout_ms": 10000, 00:03:47.409 "arbitration_burst": 0, 00:03:47.409 "low_priority_weight": 0, 00:03:47.409 "medium_priority_weight": 0, 00:03:47.409 "high_priority_weight": 0, 00:03:47.409 "nvme_adminq_poll_period_us": 10000, 00:03:47.409 "nvme_ioq_poll_period_us": 0, 00:03:47.409 "io_queue_requests": 0, 00:03:47.409 "delay_cmd_submit": true, 00:03:47.409 "transport_retry_count": 4, 00:03:47.409 "bdev_retry_count": 3, 00:03:47.409 "transport_ack_timeout": 0, 00:03:47.409 "ctrlr_loss_timeout_sec": 0, 00:03:47.409 "reconnect_delay_sec": 0, 00:03:47.409 "fast_io_fail_timeout_sec": 0, 00:03:47.409 "disable_auto_failback": false, 00:03:47.409 "generate_uuids": false, 00:03:47.409 "transport_tos": 0, 00:03:47.409 "nvme_error_stat": false, 
00:03:47.409 "rdma_srq_size": 0, 00:03:47.409 "io_path_stat": false, 00:03:47.409 "allow_accel_sequence": false, 00:03:47.409 "rdma_max_cq_size": 0, 00:03:47.409 "rdma_cm_event_timeout_ms": 0, 00:03:47.409 "dhchap_digests": [ 00:03:47.409 "sha256", 00:03:47.409 "sha384", 00:03:47.409 "sha512" 00:03:47.409 ], 00:03:47.409 "dhchap_dhgroups": [ 00:03:47.409 "null", 00:03:47.409 "ffdhe2048", 00:03:47.409 "ffdhe3072", 00:03:47.409 "ffdhe4096", 00:03:47.409 "ffdhe6144", 00:03:47.409 "ffdhe8192" 00:03:47.409 ] 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "bdev_nvme_set_hotplug", 00:03:47.409 "params": { 00:03:47.409 "period_us": 100000, 00:03:47.409 "enable": false 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "bdev_wait_for_examine" 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "scsi", 00:03:47.409 "config": null 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "scheduler", 00:03:47.409 "config": [ 00:03:47.409 { 00:03:47.409 "method": "framework_set_scheduler", 00:03:47.409 "params": { 00:03:47.409 "name": "static" 00:03:47.409 } 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "vhost_scsi", 00:03:47.409 "config": [] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "vhost_blk", 00:03:47.409 "config": [] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "ublk", 00:03:47.409 "config": [] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "nbd", 00:03:47.409 "config": [] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "nvmf", 00:03:47.409 "config": [ 00:03:47.409 { 00:03:47.409 "method": "nvmf_set_config", 00:03:47.409 "params": { 00:03:47.409 "discovery_filter": "match_any", 00:03:47.409 "admin_cmd_passthru": { 00:03:47.409 "identify_ctrlr": false 00:03:47.409 }, 00:03:47.409 "dhchap_digests": [ 00:03:47.409 "sha256", 00:03:47.409 "sha384", 00:03:47.409 "sha512" 00:03:47.409 ], 00:03:47.409 "dhchap_dhgroups": [ 00:03:47.409 "null", 00:03:47.409 "ffdhe2048", 00:03:47.409 "ffdhe3072", 00:03:47.409 "ffdhe4096", 00:03:47.409 "ffdhe6144", 00:03:47.409 "ffdhe8192" 00:03:47.409 ] 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "nvmf_set_max_subsystems", 00:03:47.409 "params": { 00:03:47.409 "max_subsystems": 1024 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "nvmf_set_crdt", 00:03:47.409 "params": { 00:03:47.409 "crdt1": 0, 00:03:47.409 "crdt2": 0, 00:03:47.409 "crdt3": 0 00:03:47.409 } 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "method": "nvmf_create_transport", 00:03:47.409 "params": { 00:03:47.409 "trtype": "TCP", 00:03:47.409 "max_queue_depth": 128, 00:03:47.409 "max_io_qpairs_per_ctrlr": 127, 00:03:47.409 "in_capsule_data_size": 4096, 00:03:47.409 "max_io_size": 131072, 00:03:47.409 "io_unit_size": 131072, 00:03:47.409 "max_aq_depth": 128, 00:03:47.409 "num_shared_buffers": 511, 00:03:47.409 "buf_cache_size": 4294967295, 00:03:47.409 "dif_insert_or_strip": false, 00:03:47.409 "zcopy": false, 00:03:47.409 "c2h_success": true, 00:03:47.409 "sock_priority": 0, 00:03:47.409 "abort_timeout_sec": 1, 00:03:47.409 "ack_timeout": 0, 00:03:47.409 "data_wr_pool_size": 0 00:03:47.409 } 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 }, 00:03:47.409 { 00:03:47.409 "subsystem": "iscsi", 00:03:47.409 "config": [ 00:03:47.409 { 00:03:47.409 "method": "iscsi_set_options", 00:03:47.409 "params": { 00:03:47.409 "node_base": "iqn.2016-06.io.spdk", 00:03:47.409 "max_sessions": 128, 00:03:47.409 
"max_connections_per_session": 2, 00:03:47.409 "max_queue_depth": 64, 00:03:47.409 "default_time2wait": 2, 00:03:47.409 "default_time2retain": 20, 00:03:47.409 "first_burst_length": 8192, 00:03:47.409 "immediate_data": true, 00:03:47.409 "allow_duplicated_isid": false, 00:03:47.409 "error_recovery_level": 0, 00:03:47.409 "nop_timeout": 60, 00:03:47.409 "nop_in_interval": 30, 00:03:47.409 "disable_chap": false, 00:03:47.409 "require_chap": false, 00:03:47.409 "mutual_chap": false, 00:03:47.409 "chap_group": 0, 00:03:47.409 "max_large_datain_per_connection": 64, 00:03:47.409 "max_r2t_per_connection": 4, 00:03:47.409 "pdu_pool_size": 36864, 00:03:47.409 "immediate_data_pool_size": 16384, 00:03:47.409 "data_out_pool_size": 2048 00:03:47.409 } 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 } 00:03:47.409 ] 00:03:47.409 } 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 737363 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 737363 ']' 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 737363 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737363 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737363' 00:03:47.409 killing process with pid 737363 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 737363 00:03:47.409 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 737363 00:03:47.670 11:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=737620 00:03:47.670 11:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:47.670 11:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 737620 ']' 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 737620' 00:03:52.960 killing process with pid 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 737620 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.960 00:03:52.960 real 0m6.541s 00:03:52.960 user 0m6.461s 00:03:52.960 sys 0m0.545s 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.960 ************************************ 00:03:52.960 END TEST skip_rpc_with_json 00:03:52.960 ************************************ 00:03:52.960 11:48:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:52.960 11:48:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.960 11:48:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.960 11:48:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.960 ************************************ 00:03:52.960 START TEST skip_rpc_with_delay 00:03:52.960 ************************************ 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.960 [2024-10-21 11:48:29.505145] app.c: 
842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:52.960 00:03:52.960 real 0m0.075s 00:03:52.960 user 0m0.052s 00:03:52.960 sys 0m0.022s 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.960 11:48:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:52.960 ************************************ 00:03:52.960 END TEST skip_rpc_with_delay 00:03:52.960 ************************************ 00:03:53.220 11:48:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:53.220 11:48:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:53.220 11:48:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:53.220 11:48:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.220 11:48:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.220 11:48:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.220 ************************************ 00:03:53.220 START TEST exit_on_failed_rpc_init 00:03:53.220 ************************************ 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=738768 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 738768 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 738768 ']' 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:53.220 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.221 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:53.221 11:48:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.221 [2024-10-21 11:48:29.665308] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
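[Annotation] The skip_rpc_with_delay failure asserted above is a one-flag conflict; a hedged sketch of the check:

```bash
# Sketch of TEST skip_rpc_with_delay: --wait-for-rpc requires an RPC server,
# so combining it with --no-rpc-server must make spdk_tgt refuse to start.
if "$SPDK_BIN_DIR/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "ERROR: spdk_tgt started despite contradictory RPC flags" >&2
    exit 1
fi
# Expected diagnostic (seen in the trace above):
#   app.c: ...: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
```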
00:03:53.221 [2024-10-21 11:48:29.665363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738768 ] 00:03:53.221 [2024-10-21 11:48:29.740408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.221 [2024-10-21 11:48:29.771008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.163 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:54.163 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.164 [2024-10-21 11:48:30.500233] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:03:54.164 [2024-10-21 11:48:30.500287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738813 ] 00:03:54.164 [2024-10-21 11:48:30.576432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.164 [2024-10-21 11:48:30.612768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:54.164 [2024-10-21 11:48:30.612819] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
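[Annotation] The exit_on_failed_rpc_init sequence traced here exercises an RPC socket collision (its remaining *ERROR* records continue below); roughly, with waitforlisten/killprocess referring to the autotest_common.sh helpers:

```bash
# Sketch of TEST exit_on_failed_rpc_init: a second spdk_tgt reusing the
# default RPC socket must fail rpc_initialize and exit non-zero.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &
spdk_pid=$!
waitforlisten "$spdk_pid"   # first instance now owns /var/tmp/spdk.sock

if "$SPDK_BIN_DIR/spdk_tgt" -m 0x2; then   # same socket, different core mask
    echo "ERROR: second target started on an in-use RPC socket" >&2
    exit 1
fi
# Expected: "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."

killprocess "$spdk_pid"
```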
00:03:54.164 [2024-10-21 11:48:30.612829] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:54.164 [2024-10-21 11:48:30.612835] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 738768 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 738768 ']' 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 738768 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 738768 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 738768' 00:03:54.164 killing process with pid 738768 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 738768 00:03:54.164 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 738768 00:03:54.425 00:03:54.425 real 0m1.301s 00:03:54.425 user 0m1.511s 00:03:54.425 sys 0m0.381s 00:03:54.425 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.425 11:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.425 ************************************ 00:03:54.425 END TEST exit_on_failed_rpc_init 00:03:54.425 ************************************ 00:03:54.425 11:48:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.425 00:03:54.425 real 0m13.698s 00:03:54.425 user 0m13.284s 00:03:54.425 sys 0m1.546s 00:03:54.425 11:48:30 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.425 11:48:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.425 ************************************ 00:03:54.425 END TEST skip_rpc 00:03:54.425 ************************************ 00:03:54.425 11:48:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.425 11:48:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.425 11:48:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.425 11:48:30 -- 
common/autotest_common.sh@10 -- # set +x 00:03:54.426 ************************************ 00:03:54.426 START TEST rpc_client 00:03:54.426 ************************************ 00:03:54.426 11:48:31 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.686 * Looking for test storage... 00:03:54.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.686 11:48:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.686 --rc genhtml_branch_coverage=1 00:03:54.686 --rc genhtml_function_coverage=1 00:03:54.686 --rc genhtml_legend=1 00:03:54.686 --rc geninfo_all_blocks=1 00:03:54.686 --rc geninfo_unexecuted_blocks=1 00:03:54.686 00:03:54.686 ' 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.686 --rc genhtml_branch_coverage=1 00:03:54.686 --rc genhtml_function_coverage=1 00:03:54.686 --rc genhtml_legend=1 00:03:54.686 --rc geninfo_all_blocks=1 00:03:54.686 --rc geninfo_unexecuted_blocks=1 00:03:54.686 00:03:54.686 ' 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.686 --rc genhtml_branch_coverage=1 00:03:54.686 --rc genhtml_function_coverage=1 00:03:54.686 --rc genhtml_legend=1 00:03:54.686 --rc geninfo_all_blocks=1 00:03:54.686 --rc geninfo_unexecuted_blocks=1 00:03:54.686 00:03:54.686 ' 00:03:54.686 11:48:31 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.686 --rc genhtml_branch_coverage=1 00:03:54.686 --rc genhtml_function_coverage=1 00:03:54.686 --rc genhtml_legend=1 00:03:54.686 --rc geninfo_all_blocks=1 00:03:54.686 --rc geninfo_unexecuted_blocks=1 00:03:54.686 00:03:54.686 ' 00:03:54.686 11:48:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:54.686 OK 00:03:54.686 11:48:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:54.686 00:03:54.686 real 0m0.232s 00:03:54.686 user 0m0.133s 00:03:54.686 sys 0m0.112s 00:03:54.687 11:48:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.687 11:48:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:54.687 ************************************ 00:03:54.687 END TEST rpc_client 00:03:54.687 ************************************ 00:03:54.948 11:48:31 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
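[Annotation] The scripts/common.sh walk above is the lcov version gate (`lt 1.15 2`); a hedged reconstruction of the comparison it traces, using the same variable names:

```bash
# Split each version on ".", "-" and ":" and compare component-wise;
# "lt 1.15 2" succeeds because the first components already differ (1 < 2).
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # missing components evaluate to 0 inside (( ))
        if ((ver1[v] > ver2[v])); then [[ $op == '>' ]]; return; fi
        if ((ver1[v] < ver2[v])); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *=* ]]   # equal versions satisfy only <=, >=, ==
}

lt 1.15 2 && echo "installed lcov is pre-2.x"
```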
00:03:54.948 11:48:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.948 11:48:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.948 11:48:31 -- common/autotest_common.sh@10 -- # set +x 00:03:54.948 ************************************ 00:03:54.948 START TEST json_config 00:03:54.948 ************************************ 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.948 11:48:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.948 11:48:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.948 11:48:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.948 11:48:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.948 11:48:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.948 11:48:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:54.948 11:48:31 json_config -- scripts/common.sh@345 -- # : 1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.948 11:48:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.948 11:48:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@353 -- # local d=1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.948 11:48:31 json_config -- scripts/common.sh@355 -- # echo 1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.948 11:48:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@353 -- # local d=2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.948 11:48:31 json_config -- scripts/common.sh@355 -- # echo 2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.948 11:48:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.948 11:48:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.948 11:48:31 json_config -- scripts/common.sh@368 -- # return 0 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.948 --rc genhtml_branch_coverage=1 00:03:54.948 --rc genhtml_function_coverage=1 00:03:54.948 --rc genhtml_legend=1 00:03:54.948 --rc geninfo_all_blocks=1 00:03:54.948 --rc geninfo_unexecuted_blocks=1 00:03:54.948 00:03:54.948 ' 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.948 --rc genhtml_branch_coverage=1 00:03:54.948 --rc genhtml_function_coverage=1 00:03:54.948 --rc genhtml_legend=1 00:03:54.948 --rc geninfo_all_blocks=1 00:03:54.948 --rc geninfo_unexecuted_blocks=1 00:03:54.948 00:03:54.948 ' 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.948 --rc genhtml_branch_coverage=1 00:03:54.948 --rc genhtml_function_coverage=1 00:03:54.948 --rc genhtml_legend=1 00:03:54.948 --rc geninfo_all_blocks=1 00:03:54.948 --rc geninfo_unexecuted_blocks=1 00:03:54.948 00:03:54.948 ' 00:03:54.948 11:48:31 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.948 --rc genhtml_branch_coverage=1 00:03:54.948 --rc genhtml_function_coverage=1 00:03:54.948 --rc genhtml_legend=1 00:03:54.948 --rc geninfo_all_blocks=1 00:03:54.948 --rc geninfo_unexecuted_blocks=1 00:03:54.948 00:03:54.948 ' 00:03:54.948 11:48:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
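[Annotation] The nvmf/common.sh environment block continuing in the trace below derives the host identity from nvme-cli; a sketch of that derivation (the exact suffix-strip on the helper's line 18 is an assumed reconstruction):

```bash
# Generate a host NQN with nvme-cli and keep its UUID part as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # UUIDs contain no ":", so this keeps the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
```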
00:03:54.948 11:48:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.948 11:48:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.948 11:48:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.948 11:48:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.948 11:48:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.948 11:48:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.948 11:48:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.948 11:48:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.948 11:48:31 json_config -- paths/export.sh@5 -- # export PATH 00:03:54.948 11:48:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@51 -- # : 0 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
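[Annotation] build_nvmf_app_args, traced next, compares an empty variable with -eq and logs `[: : integer expression expected`; a hedged sketch of the usual guard (the variable name and sudo wrapper are illustrative, not the helper's confirmed code):

```bash
# `[ '' -eq 1 ]` fails because an unset variable reaches an arithmetic test;
# expanding with a default avoids the error seen in the trace below.
if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
    NVMF_APP=(sudo -E -u "$USER" "${NVMF_APP[@]}")   # illustrative only
fi
```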
00:03:54.948 11:48:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.948 11:48:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.949 11:48:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.949 11:48:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.949 11:48:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:54.949 INFO: JSON configuration test init 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:54.949 11:48:31 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:54.949 11:48:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.949 11:48:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.209 11:48:31 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.209 11:48:31 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:55.209 11:48:31 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:55.209 11:48:31 json_config -- json_config/common.sh@10 -- # shift 00:03:55.209 11:48:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:55.209 11:48:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:55.209 11:48:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:55.209 11:48:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.209 11:48:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.209 11:48:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=739246 00:03:55.209 11:48:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:55.209 Waiting for target to run... 00:03:55.209 11:48:31 json_config -- json_config/common.sh@25 -- # waitforlisten 739246 /var/tmp/spdk_tgt.sock 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 739246 ']' 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:55.209 11:48:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:55.209 11:48:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.209 [2024-10-21 11:48:31.606399] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
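[Annotation] The json_config prologue above boots the target on its own socket in --wait-for-rpc mode; condensed below, with the gen_nvme/load_config hand-off that follows shown as plain rpc.py calls:

```bash
# Condensed json_config_test_start_app flow: private RPC socket, 1024 MB,
# and --wait-for-rpc so configuration RPCs run before framework init.
app_socket=/var/tmp/spdk_tgt.sock
"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 -s 1024 -r "$app_socket" --wait-for-rpc &
app_pid=$!
waitforlisten "$app_pid" "$app_socket"

# load_config (fed the generated NVMe config, as traced below) both applies
# the JSON and completes framework initialization.
scripts/gen_nvme.sh --json-with-subsystems \
    | scripts/rpc.py -s "$app_socket" load_config
```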
00:03:55.209 [2024-10-21 11:48:31.606455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid739246 ] 00:03:55.469 [2024-10-21 11:48:31.899008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.470 [2024-10-21 11:48:31.924158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:56.039 11:48:32 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.039 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.039 11:48:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:56.039 11:48:32 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:56.039 11:48:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:56.611 11:48:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.611 11:48:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:56.611 11:48:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:56.611 11:48:32 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:56.611 11:48:33 json_config -- 
json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@54 -- # sort 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:56.611 11:48:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:56.611 11:48:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.611 11:48:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:56.872 11:48:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.872 11:48:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:56.872 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:56.872 MallocForNvmf0 00:03:56.872 11:48:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:56.872 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:57.133 MallocForNvmf1 00:03:57.133 11:48:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:57.133 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:57.393 [2024-10-21 11:48:33.733438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.393 11:48:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:57.393 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:57.393 11:48:33 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:57.393 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:57.654 11:48:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:57.654 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:57.915 11:48:34 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:57.915 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:57.915 [2024-10-21 11:48:34.471684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:57.915 11:48:34 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:57.915 11:48:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:57.915 11:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.176 11:48:34 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:58.176 11:48:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.176 11:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.176 11:48:34 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:58.176 11:48:34 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:58.176 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:58.176 MallocBdevForConfigChangeCheck 00:03:58.176 11:48:34 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:58.176 11:48:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.176 11:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.437 11:48:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:58.437 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.697 11:48:35 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:58.697 INFO: shutting down applications... 
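[Annotation] The nvmf subsystem configuration assembled by the trace above, written out as the equivalent rpc.py sequence (every name and value appears in the log; note nvmf_get_transports would return -19 "No such device" before the transport exists):

```bash
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0   # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
```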
00:03:58.697 11:48:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:58.697 11:48:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:58.697 11:48:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:58.697 11:48:35 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:58.958 Calling clear_iscsi_subsystem 00:03:58.958 Calling clear_nvmf_subsystem 00:03:58.958 Calling clear_nbd_subsystem 00:03:58.958 Calling clear_ublk_subsystem 00:03:58.958 Calling clear_vhost_blk_subsystem 00:03:58.958 Calling clear_vhost_scsi_subsystem 00:03:58.958 Calling clear_bdev_subsystem 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:58.958 11:48:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:59.530 11:48:35 json_config -- json_config/json_config.sh@352 -- # break 00:03:59.530 11:48:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:59.530 11:48:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:59.530 11:48:35 json_config -- json_config/common.sh@31 -- # local app=target 00:03:59.530 11:48:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.530 11:48:35 json_config -- json_config/common.sh@35 -- # [[ -n 739246 ]] 00:03:59.530 11:48:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 739246 00:03:59.530 11:48:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.530 11:48:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.531 11:48:35 json_config -- json_config/common.sh@41 -- # kill -0 739246 00:03:59.531 11:48:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.103 11:48:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.103 11:48:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.103 11:48:36 json_config -- json_config/common.sh@41 -- # kill -0 739246 00:04:00.103 11:48:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.103 11:48:36 json_config -- json_config/common.sh@43 -- # break 00:04:00.103 11:48:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.103 11:48:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.103 SPDK target shutdown done 00:04:00.103 11:48:36 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:00.103 INFO: relaunching applications... 
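The shutdown just traced is two-phase: clear_config.py strips the runtime configuration subsystem by subsystem (the "Calling clear_*_subsystem" lines), then the app is stopped with SIGINT and polled until it exits. A minimal sketch of the polling idiom, with the pid and the 30 x 0.5 s budget taken from this run (kill -0 sends no signal; it only tests that the pid still exists):

    pid=739246                     # app_pid["target"] in this run
    kill -SIGINT "$pid"
    i=0
    while (( i < 30 )); do
        kill -0 "$pid" 2>/dev/null || break   # process gone -> shutdown done
        sleep 0.5
        i=$((i + 1))
    done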
00:04:00.103 11:48:36 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.103 11:48:36 json_config -- json_config/common.sh@9 -- # local app=target 00:04:00.103 11:48:36 json_config -- json_config/common.sh@10 -- # shift 00:04:00.103 11:48:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.103 11:48:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.103 11:48:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.103 11:48:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.103 11:48:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.103 11:48:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=740383 00:04:00.103 11:48:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.103 Waiting for target to run... 00:04:00.103 11:48:36 json_config -- json_config/common.sh@25 -- # waitforlisten 740383 /var/tmp/spdk_tgt.sock 00:04:00.103 11:48:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.103 11:48:36 json_config -- common/autotest_common.sh@831 -- # '[' -z 740383 ']' 00:04:00.104 11:48:36 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.104 11:48:36 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:00.104 11:48:36 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.104 11:48:36 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:00.104 11:48:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.104 [2024-10-21 11:48:36.456992] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:00.104 [2024-10-21 11:48:36.457055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740383 ] 00:04:00.365 [2024-10-21 11:48:36.814565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.365 [2024-10-21 11:48:36.840350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.936 [2024-10-21 11:48:37.344316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.936 [2024-10-21 11:48:37.376789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.936 11:48:37 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:00.936 11:48:37 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:00.936 11:48:37 json_config -- json_config/common.sh@26 -- # echo '' 00:04:00.936 00:04:00.936 11:48:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:00.936 11:48:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:00.936 INFO: Checking if target configuration is the same... 
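This begins the round-trip check: the target has been relaunched from the saved spdk_tgt_config.json, and json_diff.sh now compares a fresh save_config dump against that file. Both sides are normalized with config_filter.py -method sort before diff, so key ordering cannot cause a false mismatch. A condensed sketch of the comparison (temp-file plumbing simplified; the script itself uses mktemp and /dev/fd):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    filt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filt -method sort > a.json
    $filt -method sort < spdk_tgt_config.json > b.json
    diff -u a.json b.json && echo 'INFO: JSON config files are the same'

The second json_diff.sh run below inverts the expectation: after bdev_malloc_delete MallocBdevForConfigChangeCheck, the diff must return 1, which is the "configuration change detected" branch.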
00:04:00.936 11:48:37 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.936 11:48:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:00.936 11:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.936 + '[' 2 -ne 2 ']' 00:04:00.936 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:00.936 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:00.936 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.936 +++ basename /dev/fd/62 00:04:00.936 ++ mktemp /tmp/62.XXX 00:04:00.936 + tmp_file_1=/tmp/62.m8r 00:04:00.936 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.937 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:00.937 + tmp_file_2=/tmp/spdk_tgt_config.json.sIE 00:04:00.937 + ret=0 00:04:00.937 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.197 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.197 + diff -u /tmp/62.m8r /tmp/spdk_tgt_config.json.sIE 00:04:01.458 + echo 'INFO: JSON config files are the same' 00:04:01.458 INFO: JSON config files are the same 00:04:01.458 + rm /tmp/62.m8r /tmp/spdk_tgt_config.json.sIE 00:04:01.458 + exit 0 00:04:01.458 11:48:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:01.458 11:48:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:01.458 INFO: changing configuration and checking if this can be detected... 00:04:01.458 11:48:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:01.458 11:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:01.458 11:48:37 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.458 11:48:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:01.458 11:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.458 + '[' 2 -ne 2 ']' 00:04:01.458 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:01.458 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:01.458 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:01.458 +++ basename /dev/fd/62 00:04:01.458 ++ mktemp /tmp/62.XXX 00:04:01.458 + tmp_file_1=/tmp/62.R5K 00:04:01.458 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.458 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:01.458 + tmp_file_2=/tmp/spdk_tgt_config.json.lfR 00:04:01.458 + ret=0 00:04:01.458 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.115 + diff -u /tmp/62.R5K /tmp/spdk_tgt_config.json.lfR 00:04:02.115 + ret=1 00:04:02.115 + echo '=== Start of file: /tmp/62.R5K ===' 00:04:02.115 + cat /tmp/62.R5K 00:04:02.115 + echo '=== End of file: /tmp/62.R5K ===' 00:04:02.115 + echo '' 00:04:02.115 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lfR ===' 00:04:02.115 + cat /tmp/spdk_tgt_config.json.lfR 00:04:02.115 + echo '=== End of file: /tmp/spdk_tgt_config.json.lfR ===' 00:04:02.115 + echo '' 00:04:02.115 + rm /tmp/62.R5K /tmp/spdk_tgt_config.json.lfR 00:04:02.115 + exit 1 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:02.115 INFO: configuration change detected. 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:02.115 11:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.115 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 740383 ]] 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:02.115 11:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.115 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:02.115 11:48:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:02.116 11:48:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:02.116 11:48:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:02.116 11:48:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 11:48:38 json_config -- json_config/json_config.sh@330 -- # killprocess 740383 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@950 -- # '[' -z 740383 ']' 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@954 -- # kill -0 740383 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@955 -- # uname 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:02.116 11:48:38 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 740383 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 740383' 00:04:02.116 killing process with pid 740383 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@969 -- # kill 740383 00:04:02.116 11:48:38 json_config -- common/autotest_common.sh@974 -- # wait 740383 00:04:02.384 11:48:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.384 11:48:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:02.384 11:48:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.384 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.384 11:48:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:02.384 11:48:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:02.384 INFO: Success 00:04:02.384 00:04:02.384 real 0m7.460s 00:04:02.384 user 0m9.170s 00:04:02.384 sys 0m1.830s 00:04:02.384 11:48:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.384 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.384 ************************************ 00:04:02.384 END TEST json_config 00:04:02.384 ************************************ 00:04:02.384 11:48:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:02.384 11:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.384 11:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.384 11:48:38 -- common/autotest_common.sh@10 -- # set +x 00:04:02.384 ************************************ 00:04:02.384 START TEST json_config_extra_key 00:04:02.384 ************************************ 00:04:02.384 11:48:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:02.384 11:48:38 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:02.384 11:48:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:02.384 11:48:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.646 11:48:39 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.646 --rc genhtml_branch_coverage=1 00:04:02.646 --rc genhtml_function_coverage=1 00:04:02.646 --rc genhtml_legend=1 00:04:02.646 --rc geninfo_all_blocks=1 00:04:02.646 --rc geninfo_unexecuted_blocks=1 00:04:02.646 00:04:02.646 ' 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.646 --rc genhtml_branch_coverage=1 00:04:02.646 --rc genhtml_function_coverage=1 00:04:02.646 --rc genhtml_legend=1 00:04:02.646 --rc geninfo_all_blocks=1 00:04:02.646 --rc geninfo_unexecuted_blocks=1 00:04:02.646 00:04:02.646 ' 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.646 --rc genhtml_branch_coverage=1 00:04:02.646 --rc genhtml_function_coverage=1 00:04:02.646 --rc genhtml_legend=1 00:04:02.646 --rc geninfo_all_blocks=1 00:04:02.646 --rc geninfo_unexecuted_blocks=1 00:04:02.646 00:04:02.646 ' 00:04:02.646 11:48:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.646 --rc genhtml_branch_coverage=1 00:04:02.646 --rc genhtml_function_coverage=1 00:04:02.646 --rc genhtml_legend=1 00:04:02.646 --rc geninfo_all_blocks=1 00:04:02.646 --rc geninfo_unexecuted_blocks=1 00:04:02.646 00:04:02.646 ' 00:04:02.646 11:48:39 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.646 11:48:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.646 11:48:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.646 11:48:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.647 11:48:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.647 11:48:39 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.647 11:48:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:02.647 11:48:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.647 11:48:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:02.647 INFO: launching applications... 
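One real (and here harmless) bash error is captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', meaning the variable under test expanded to the empty string, and test(1) rejects '' as an operand of the numeric -eq, printing "integer expression expected". The script continues because the failing test simply takes the false branch. A two-line illustration of the failure and the usual guard (TEST_FLAG is a hypothetical name):

    [ "$TEST_FLAG" -eq 1 ] && echo yes        # unset/empty -> "integer expression expected"
    [ "${TEST_FLAG:-0}" -eq 1 ] && echo yes   # default the value first; empty now compares as 0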
00:04:02.647 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=740969 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.647 Waiting for target to run... 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 740969 /var/tmp/spdk_tgt.sock 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 740969 ']' 00:04:02.647 11:48:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.647 11:48:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.647 [2024-10-21 11:48:39.138353] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:02.647 [2024-10-21 11:48:39.138433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740969 ] 00:04:02.908 [2024-10-21 11:48:39.421813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.908 [2024-10-21 11:48:39.445576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.480 11:48:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:03.480 11:48:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:03.480 00:04:03.480 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:03.480 INFO: shutting down applications... 
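json_config_test_start_app has launched spdk_tgt with --json extra_key.json and parked in waitforlisten until pid 740969 answered on /var/tmp/spdk_tgt.sock. The real helper lives in autotest_common.sh; a rough approximation of the readiness poll it performs (loop bound and sleep interval assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break   # socket is up
        sleep 0.1
    done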
00:04:03.480 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 740969 ]] 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 740969 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 740969 00:04:03.480 11:48:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 740969 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:04.051 11:48:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:04.051 SPDK target shutdown done 00:04:04.051 11:48:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:04.051 Success 00:04:04.051 00:04:04.051 real 0m1.572s 00:04:04.051 user 0m1.184s 00:04:04.051 sys 0m0.418s 00:04:04.051 11:48:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.051 11:48:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:04.051 ************************************ 00:04:04.051 END TEST json_config_extra_key 00:04:04.051 ************************************ 00:04:04.051 11:48:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:04.051 11:48:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.051 11:48:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.051 11:48:40 -- common/autotest_common.sh@10 -- # set +x 00:04:04.051 ************************************ 00:04:04.051 START TEST alias_rpc 00:04:04.051 ************************************ 00:04:04.051 11:48:40 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:04.051 * Looking for test storage... 
00:04:04.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:04.051 11:48:40 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:04.051 11:48:40 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:04.051 11:48:40 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.314 11:48:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.314 --rc genhtml_branch_coverage=1 00:04:04.314 --rc genhtml_function_coverage=1 00:04:04.314 --rc genhtml_legend=1 00:04:04.314 --rc geninfo_all_blocks=1 00:04:04.314 --rc geninfo_unexecuted_blocks=1 00:04:04.314 00:04:04.314 ' 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.314 --rc genhtml_branch_coverage=1 00:04:04.314 --rc genhtml_function_coverage=1 00:04:04.314 --rc genhtml_legend=1 00:04:04.314 --rc geninfo_all_blocks=1 00:04:04.314 --rc geninfo_unexecuted_blocks=1 00:04:04.314 00:04:04.314 ' 00:04:04.314 11:48:40 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.314 --rc genhtml_branch_coverage=1 00:04:04.314 --rc genhtml_function_coverage=1 00:04:04.314 --rc genhtml_legend=1 00:04:04.314 --rc geninfo_all_blocks=1 00:04:04.314 --rc geninfo_unexecuted_blocks=1 00:04:04.314 00:04:04.314 ' 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.314 --rc genhtml_branch_coverage=1 00:04:04.314 --rc genhtml_function_coverage=1 00:04:04.314 --rc genhtml_legend=1 00:04:04.314 --rc geninfo_all_blocks=1 00:04:04.314 --rc geninfo_unexecuted_blocks=1 00:04:04.314 00:04:04.314 ' 00:04:04.314 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:04.314 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=741327 00:04:04.314 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 741327 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 741327 ']' 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:04.314 11:48:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.314 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.314 [2024-10-21 11:48:40.778636] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
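Note the ERR trap armed at alias_rpc.sh@10 before the target comes up: any failing command tears the daemon down instead of leaking it into later tests. The pattern, restated with plain kill (the suite uses its own killprocess helper):

    spdk_tgt_pid=741327                                  # pid from this run; normally captured via $!
    trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' ERR
    # caveat: without set -o errtrace, an ERR trap is not inherited by functions or subshells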
00:04:04.314 [2024-10-21 11:48:40.778712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741327 ] 00:04:04.314 [2024-10-21 11:48:40.859589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.314 [2024-10-21 11:48:40.896277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:05.257 11:48:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:05.257 11:48:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 741327 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 741327 ']' 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 741327 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 741327 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 741327' 00:04:05.257 killing process with pid 741327 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 741327 00:04:05.257 11:48:41 alias_rpc -- common/autotest_common.sh@974 -- # wait 741327 00:04:05.517 00:04:05.517 real 0m1.475s 00:04:05.517 user 0m1.600s 00:04:05.517 sys 0m0.415s 00:04:05.517 11:48:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.517 11:48:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.517 ************************************ 00:04:05.517 END TEST alias_rpc 00:04:05.517 ************************************ 00:04:05.517 11:48:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:05.517 11:48:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:05.517 11:48:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.517 11:48:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.517 11:48:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.517 ************************************ 00:04:05.517 START TEST spdkcli_tcp 00:04:05.517 ************************************ 00:04:05.517 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:05.778 * Looking for test storage... 
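The alias exercise itself is the single rpc.py load_config -i call traced above: a configuration is replayed through the RPC client so that deprecated method aliases get resolved, the target is killed, and the test passes if nothing errored. Reading -i as --include-aliases is an assumption here; confirm against rpc.py -h on your tree:

    # replay a config through the client, resolving deprecated aliases (input on stdin)
    rpc.py load_config -i < saved_config.json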
00:04:05.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.778 11:48:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.778 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.778 --rc genhtml_branch_coverage=1 00:04:05.778 --rc genhtml_function_coverage=1 00:04:05.778 --rc genhtml_legend=1 00:04:05.779 --rc geninfo_all_blocks=1 00:04:05.779 --rc geninfo_unexecuted_blocks=1 00:04:05.779 00:04:05.779 ' 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.779 --rc genhtml_branch_coverage=1 00:04:05.779 --rc genhtml_function_coverage=1 00:04:05.779 --rc genhtml_legend=1 00:04:05.779 --rc geninfo_all_blocks=1 00:04:05.779 --rc 
geninfo_unexecuted_blocks=1 00:04:05.779 00:04:05.779 ' 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.779 --rc genhtml_branch_coverage=1 00:04:05.779 --rc genhtml_function_coverage=1 00:04:05.779 --rc genhtml_legend=1 00:04:05.779 --rc geninfo_all_blocks=1 00:04:05.779 --rc geninfo_unexecuted_blocks=1 00:04:05.779 00:04:05.779 ' 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.779 --rc genhtml_branch_coverage=1 00:04:05.779 --rc genhtml_function_coverage=1 00:04:05.779 --rc genhtml_legend=1 00:04:05.779 --rc geninfo_all_blocks=1 00:04:05.779 --rc geninfo_unexecuted_blocks=1 00:04:05.779 00:04:05.779 ' 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=741660 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 741660 00:04:05.779 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 741660 ']' 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.779 11:48:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.779 [2024-10-21 11:48:42.341608] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
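Unlike the earlier single-core runs, spdkcli_tcp starts its target with -m 0x3 -p 0: a two-core mask with the main core pinned to 0, which is why the next lines report reactors on both core 0 and core 1. The mask is plain bit arithmetic:

    # -m takes a hex cpumask; bit n selects core n
    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3, i.e. cores 0 and 1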
00:04:05.779 [2024-10-21 11:48:42.341680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741660 ] 00:04:06.039 [2024-10-21 11:48:42.422018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:06.039 [2024-10-21 11:48:42.464702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.039 [2024-10-21 11:48:42.464703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.610 11:48:43 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.610 11:48:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:06.610 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:06.610 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=741977 00:04:06.610 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:06.871 [ 00:04:06.871 "bdev_malloc_delete", 00:04:06.871 "bdev_malloc_create", 00:04:06.871 "bdev_null_resize", 00:04:06.871 "bdev_null_delete", 00:04:06.871 "bdev_null_create", 00:04:06.871 "bdev_nvme_cuse_unregister", 00:04:06.871 "bdev_nvme_cuse_register", 00:04:06.871 "bdev_opal_new_user", 00:04:06.871 "bdev_opal_set_lock_state", 00:04:06.871 "bdev_opal_delete", 00:04:06.871 "bdev_opal_get_info", 00:04:06.871 "bdev_opal_create", 00:04:06.871 "bdev_nvme_opal_revert", 00:04:06.871 "bdev_nvme_opal_init", 00:04:06.871 "bdev_nvme_send_cmd", 00:04:06.871 "bdev_nvme_set_keys", 00:04:06.871 "bdev_nvme_get_path_iostat", 00:04:06.871 "bdev_nvme_get_mdns_discovery_info", 00:04:06.871 "bdev_nvme_stop_mdns_discovery", 00:04:06.871 "bdev_nvme_start_mdns_discovery", 00:04:06.871 "bdev_nvme_set_multipath_policy", 00:04:06.871 "bdev_nvme_set_preferred_path", 00:04:06.871 "bdev_nvme_get_io_paths", 00:04:06.871 "bdev_nvme_remove_error_injection", 00:04:06.871 "bdev_nvme_add_error_injection", 00:04:06.871 "bdev_nvme_get_discovery_info", 00:04:06.871 "bdev_nvme_stop_discovery", 00:04:06.871 "bdev_nvme_start_discovery", 00:04:06.871 "bdev_nvme_get_controller_health_info", 00:04:06.871 "bdev_nvme_disable_controller", 00:04:06.871 "bdev_nvme_enable_controller", 00:04:06.871 "bdev_nvme_reset_controller", 00:04:06.871 "bdev_nvme_get_transport_statistics", 00:04:06.871 "bdev_nvme_apply_firmware", 00:04:06.871 "bdev_nvme_detach_controller", 00:04:06.871 "bdev_nvme_get_controllers", 00:04:06.871 "bdev_nvme_attach_controller", 00:04:06.871 "bdev_nvme_set_hotplug", 00:04:06.871 "bdev_nvme_set_options", 00:04:06.871 "bdev_passthru_delete", 00:04:06.871 "bdev_passthru_create", 00:04:06.871 "bdev_lvol_set_parent_bdev", 00:04:06.871 "bdev_lvol_set_parent", 00:04:06.871 "bdev_lvol_check_shallow_copy", 00:04:06.871 "bdev_lvol_start_shallow_copy", 00:04:06.871 "bdev_lvol_grow_lvstore", 00:04:06.871 "bdev_lvol_get_lvols", 00:04:06.871 "bdev_lvol_get_lvstores", 00:04:06.871 "bdev_lvol_delete", 00:04:06.871 "bdev_lvol_set_read_only", 00:04:06.871 "bdev_lvol_resize", 00:04:06.871 "bdev_lvol_decouple_parent", 00:04:06.871 "bdev_lvol_inflate", 00:04:06.871 "bdev_lvol_rename", 00:04:06.871 "bdev_lvol_clone_bdev", 00:04:06.871 "bdev_lvol_clone", 00:04:06.871 "bdev_lvol_snapshot", 00:04:06.871 "bdev_lvol_create", 00:04:06.872 "bdev_lvol_delete_lvstore", 00:04:06.872 "bdev_lvol_rename_lvstore", 
00:04:06.872 "bdev_lvol_create_lvstore", 00:04:06.872 "bdev_raid_set_options", 00:04:06.872 "bdev_raid_remove_base_bdev", 00:04:06.872 "bdev_raid_add_base_bdev", 00:04:06.872 "bdev_raid_delete", 00:04:06.872 "bdev_raid_create", 00:04:06.872 "bdev_raid_get_bdevs", 00:04:06.872 "bdev_error_inject_error", 00:04:06.872 "bdev_error_delete", 00:04:06.872 "bdev_error_create", 00:04:06.872 "bdev_split_delete", 00:04:06.872 "bdev_split_create", 00:04:06.872 "bdev_delay_delete", 00:04:06.872 "bdev_delay_create", 00:04:06.872 "bdev_delay_update_latency", 00:04:06.872 "bdev_zone_block_delete", 00:04:06.872 "bdev_zone_block_create", 00:04:06.872 "blobfs_create", 00:04:06.872 "blobfs_detect", 00:04:06.872 "blobfs_set_cache_size", 00:04:06.872 "bdev_aio_delete", 00:04:06.872 "bdev_aio_rescan", 00:04:06.872 "bdev_aio_create", 00:04:06.872 "bdev_ftl_set_property", 00:04:06.872 "bdev_ftl_get_properties", 00:04:06.872 "bdev_ftl_get_stats", 00:04:06.872 "bdev_ftl_unmap", 00:04:06.872 "bdev_ftl_unload", 00:04:06.872 "bdev_ftl_delete", 00:04:06.872 "bdev_ftl_load", 00:04:06.872 "bdev_ftl_create", 00:04:06.872 "bdev_virtio_attach_controller", 00:04:06.872 "bdev_virtio_scsi_get_devices", 00:04:06.872 "bdev_virtio_detach_controller", 00:04:06.872 "bdev_virtio_blk_set_hotplug", 00:04:06.872 "bdev_iscsi_delete", 00:04:06.872 "bdev_iscsi_create", 00:04:06.872 "bdev_iscsi_set_options", 00:04:06.872 "accel_error_inject_error", 00:04:06.872 "ioat_scan_accel_module", 00:04:06.872 "dsa_scan_accel_module", 00:04:06.872 "iaa_scan_accel_module", 00:04:06.872 "vfu_virtio_create_fs_endpoint", 00:04:06.872 "vfu_virtio_create_scsi_endpoint", 00:04:06.872 "vfu_virtio_scsi_remove_target", 00:04:06.872 "vfu_virtio_scsi_add_target", 00:04:06.872 "vfu_virtio_create_blk_endpoint", 00:04:06.872 "vfu_virtio_delete_endpoint", 00:04:06.872 "keyring_file_remove_key", 00:04:06.872 "keyring_file_add_key", 00:04:06.872 "keyring_linux_set_options", 00:04:06.872 "fsdev_aio_delete", 00:04:06.872 "fsdev_aio_create", 00:04:06.872 "iscsi_get_histogram", 00:04:06.872 "iscsi_enable_histogram", 00:04:06.872 "iscsi_set_options", 00:04:06.872 "iscsi_get_auth_groups", 00:04:06.872 "iscsi_auth_group_remove_secret", 00:04:06.872 "iscsi_auth_group_add_secret", 00:04:06.872 "iscsi_delete_auth_group", 00:04:06.872 "iscsi_create_auth_group", 00:04:06.872 "iscsi_set_discovery_auth", 00:04:06.872 "iscsi_get_options", 00:04:06.872 "iscsi_target_node_request_logout", 00:04:06.872 "iscsi_target_node_set_redirect", 00:04:06.872 "iscsi_target_node_set_auth", 00:04:06.872 "iscsi_target_node_add_lun", 00:04:06.872 "iscsi_get_stats", 00:04:06.872 "iscsi_get_connections", 00:04:06.872 "iscsi_portal_group_set_auth", 00:04:06.872 "iscsi_start_portal_group", 00:04:06.872 "iscsi_delete_portal_group", 00:04:06.872 "iscsi_create_portal_group", 00:04:06.872 "iscsi_get_portal_groups", 00:04:06.872 "iscsi_delete_target_node", 00:04:06.872 "iscsi_target_node_remove_pg_ig_maps", 00:04:06.872 "iscsi_target_node_add_pg_ig_maps", 00:04:06.872 "iscsi_create_target_node", 00:04:06.872 "iscsi_get_target_nodes", 00:04:06.872 "iscsi_delete_initiator_group", 00:04:06.872 "iscsi_initiator_group_remove_initiators", 00:04:06.872 "iscsi_initiator_group_add_initiators", 00:04:06.872 "iscsi_create_initiator_group", 00:04:06.872 "iscsi_get_initiator_groups", 00:04:06.872 "nvmf_set_crdt", 00:04:06.872 "nvmf_set_config", 00:04:06.872 "nvmf_set_max_subsystems", 00:04:06.872 "nvmf_stop_mdns_prr", 00:04:06.872 "nvmf_publish_mdns_prr", 00:04:06.872 "nvmf_subsystem_get_listeners", 00:04:06.872 
"nvmf_subsystem_get_qpairs", 00:04:06.872 "nvmf_subsystem_get_controllers", 00:04:06.872 "nvmf_get_stats", 00:04:06.872 "nvmf_get_transports", 00:04:06.872 "nvmf_create_transport", 00:04:06.872 "nvmf_get_targets", 00:04:06.872 "nvmf_delete_target", 00:04:06.872 "nvmf_create_target", 00:04:06.872 "nvmf_subsystem_allow_any_host", 00:04:06.872 "nvmf_subsystem_set_keys", 00:04:06.872 "nvmf_subsystem_remove_host", 00:04:06.872 "nvmf_subsystem_add_host", 00:04:06.872 "nvmf_ns_remove_host", 00:04:06.872 "nvmf_ns_add_host", 00:04:06.872 "nvmf_subsystem_remove_ns", 00:04:06.872 "nvmf_subsystem_set_ns_ana_group", 00:04:06.872 "nvmf_subsystem_add_ns", 00:04:06.872 "nvmf_subsystem_listener_set_ana_state", 00:04:06.872 "nvmf_discovery_get_referrals", 00:04:06.872 "nvmf_discovery_remove_referral", 00:04:06.872 "nvmf_discovery_add_referral", 00:04:06.872 "nvmf_subsystem_remove_listener", 00:04:06.872 "nvmf_subsystem_add_listener", 00:04:06.872 "nvmf_delete_subsystem", 00:04:06.872 "nvmf_create_subsystem", 00:04:06.872 "nvmf_get_subsystems", 00:04:06.872 "env_dpdk_get_mem_stats", 00:04:06.872 "nbd_get_disks", 00:04:06.872 "nbd_stop_disk", 00:04:06.872 "nbd_start_disk", 00:04:06.872 "ublk_recover_disk", 00:04:06.872 "ublk_get_disks", 00:04:06.872 "ublk_stop_disk", 00:04:06.872 "ublk_start_disk", 00:04:06.872 "ublk_destroy_target", 00:04:06.872 "ublk_create_target", 00:04:06.872 "virtio_blk_create_transport", 00:04:06.872 "virtio_blk_get_transports", 00:04:06.872 "vhost_controller_set_coalescing", 00:04:06.872 "vhost_get_controllers", 00:04:06.872 "vhost_delete_controller", 00:04:06.872 "vhost_create_blk_controller", 00:04:06.872 "vhost_scsi_controller_remove_target", 00:04:06.872 "vhost_scsi_controller_add_target", 00:04:06.872 "vhost_start_scsi_controller", 00:04:06.872 "vhost_create_scsi_controller", 00:04:06.872 "thread_set_cpumask", 00:04:06.872 "scheduler_set_options", 00:04:06.872 "framework_get_governor", 00:04:06.872 "framework_get_scheduler", 00:04:06.872 "framework_set_scheduler", 00:04:06.872 "framework_get_reactors", 00:04:06.872 "thread_get_io_channels", 00:04:06.872 "thread_get_pollers", 00:04:06.872 "thread_get_stats", 00:04:06.872 "framework_monitor_context_switch", 00:04:06.872 "spdk_kill_instance", 00:04:06.872 "log_enable_timestamps", 00:04:06.872 "log_get_flags", 00:04:06.872 "log_clear_flag", 00:04:06.872 "log_set_flag", 00:04:06.872 "log_get_level", 00:04:06.872 "log_set_level", 00:04:06.872 "log_get_print_level", 00:04:06.872 "log_set_print_level", 00:04:06.872 "framework_enable_cpumask_locks", 00:04:06.872 "framework_disable_cpumask_locks", 00:04:06.872 "framework_wait_init", 00:04:06.872 "framework_start_init", 00:04:06.872 "scsi_get_devices", 00:04:06.872 "bdev_get_histogram", 00:04:06.872 "bdev_enable_histogram", 00:04:06.872 "bdev_set_qos_limit", 00:04:06.872 "bdev_set_qd_sampling_period", 00:04:06.872 "bdev_get_bdevs", 00:04:06.872 "bdev_reset_iostat", 00:04:06.872 "bdev_get_iostat", 00:04:06.872 "bdev_examine", 00:04:06.872 "bdev_wait_for_examine", 00:04:06.872 "bdev_set_options", 00:04:06.872 "accel_get_stats", 00:04:06.872 "accel_set_options", 00:04:06.872 "accel_set_driver", 00:04:06.872 "accel_crypto_key_destroy", 00:04:06.872 "accel_crypto_keys_get", 00:04:06.872 "accel_crypto_key_create", 00:04:06.872 "accel_assign_opc", 00:04:06.872 "accel_get_module_info", 00:04:06.872 "accel_get_opc_assignments", 00:04:06.872 "vmd_rescan", 00:04:06.872 "vmd_remove_device", 00:04:06.872 "vmd_enable", 00:04:06.872 "sock_get_default_impl", 00:04:06.872 "sock_set_default_impl", 
00:04:06.872 "sock_impl_set_options", 00:04:06.872 "sock_impl_get_options", 00:04:06.872 "iobuf_get_stats", 00:04:06.872 "iobuf_set_options", 00:04:06.872 "keyring_get_keys", 00:04:06.872 "vfu_tgt_set_base_path", 00:04:06.872 "framework_get_pci_devices", 00:04:06.872 "framework_get_config", 00:04:06.872 "framework_get_subsystems", 00:04:06.872 "fsdev_set_opts", 00:04:06.872 "fsdev_get_opts", 00:04:06.872 "trace_get_info", 00:04:06.872 "trace_get_tpoint_group_mask", 00:04:06.872 "trace_disable_tpoint_group", 00:04:06.872 "trace_enable_tpoint_group", 00:04:06.872 "trace_clear_tpoint_mask", 00:04:06.872 "trace_set_tpoint_mask", 00:04:06.872 "notify_get_notifications", 00:04:06.872 "notify_get_types", 00:04:06.872 "spdk_get_version", 00:04:06.872 "rpc_get_methods" 00:04:06.872 ] 00:04:06.872 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:06.872 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:06.872 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 741660 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 741660 ']' 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 741660 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 741660 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 741660' 00:04:06.872 killing process with pid 741660 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 741660 00:04:06.872 11:48:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 741660 00:04:07.133 00:04:07.133 real 0m1.559s 00:04:07.133 user 0m2.822s 00:04:07.133 sys 0m0.501s 00:04:07.133 11:48:43 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.133 11:48:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.133 ************************************ 00:04:07.133 END TEST spdkcli_tcp 00:04:07.133 ************************************ 00:04:07.134 11:48:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.134 11:48:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.134 11:48:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.134 11:48:43 -- common/autotest_common.sh@10 -- # set +x 00:04:07.134 ************************************ 00:04:07.134 START TEST dpdk_mem_utility 00:04:07.134 ************************************ 00:04:07.134 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.395 * Looking for test storage... 
00:04:07.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:07.395 11:48:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:07.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.395 --rc genhtml_branch_coverage=1
00:04:07.395 --rc genhtml_function_coverage=1
00:04:07.395 --rc genhtml_legend=1
00:04:07.395 --rc geninfo_all_blocks=1
00:04:07.395 --rc geninfo_unexecuted_blocks=1
00:04:07.395
00:04:07.395 '
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:07.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.395 --rc genhtml_branch_coverage=1
00:04:07.395 --rc genhtml_function_coverage=1
00:04:07.395 --rc genhtml_legend=1
00:04:07.395 --rc geninfo_all_blocks=1
00:04:07.395 --rc geninfo_unexecuted_blocks=1
00:04:07.395
00:04:07.395 '
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:07.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.395 --rc genhtml_branch_coverage=1
00:04:07.395 --rc genhtml_function_coverage=1
00:04:07.395 --rc genhtml_legend=1
00:04:07.395 --rc geninfo_all_blocks=1
00:04:07.395 --rc geninfo_unexecuted_blocks=1
00:04:07.395
00:04:07.395 '
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:07.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.395 --rc genhtml_branch_coverage=1
00:04:07.395 --rc genhtml_function_coverage=1
00:04:07.395 --rc genhtml_legend=1
00:04:07.395 --rc geninfo_all_blocks=1
00:04:07.395 --rc geninfo_unexecuted_blocks=1
00:04:07.395
00:04:07.395 '
00:04:07.395 11:48:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:07.395 11:48:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=742057
00:04:07.395 11:48:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 742057
00:04:07.395 11:48:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 742057 ']'
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:07.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:07.395 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:07.395 [2024-10-21 11:48:43.963177] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:04:07.395 [2024-10-21 11:48:43.963248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742057 ]
00:04:07.656 [2024-10-21 11:48:44.043875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:07.656 [2024-10-21 11:48:44.080743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.228 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:08.228 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:04:08.228 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:08.228 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:08.228 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:08.228 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:08.228 {
00:04:08.228 "filename": "/tmp/spdk_mem_dump.txt"
00:04:08.228 }
00:04:08.228 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:08.228 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:08.228 DPDK memory size 810.000000 MiB in 1 heap(s)
00:04:08.228 1 heaps totaling size 810.000000 MiB
00:04:08.228 size: 810.000000 MiB heap id: 0
00:04:08.228 end heaps----------
00:04:08.228 9 mempools totaling size 595.772034 MiB
00:04:08.228 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:08.228 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:08.228 size: 92.545471 MiB name: bdev_io_742057
00:04:08.228 size: 50.003479 MiB name: msgpool_742057
00:04:08.228 size: 36.509338 MiB name: fsdev_io_742057
00:04:08.228 size: 21.763794 MiB name: PDU_Pool
00:04:08.228 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:08.228 size: 4.133484 MiB name: evtpool_742057
00:04:08.228 size: 0.026123 MiB name: Session_Pool
00:04:08.228 end mempools-------
00:04:08.228 6 memzones totaling size 4.142822 MiB
00:04:08.228 size: 1.000366 MiB name: RG_ring_0_742057
00:04:08.228 size: 1.000366 MiB name: RG_ring_1_742057
00:04:08.228 size: 1.000366 MiB name: RG_ring_4_742057
00:04:08.228 size: 1.000366 MiB name: RG_ring_5_742057
00:04:08.228 size: 0.125366 MiB name: RG_ring_2_742057
00:04:08.228 size: 0.015991 MiB name: RG_ring_3_742057
00:04:08.228 end memzones-------
00:04:08.228 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:08.489 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:08.489 list of free elements. size: 10.862488 MiB
00:04:08.489 element at address: 0x200018a00000 with size: 0.999878 MiB
00:04:08.489 element at address: 0x200018c00000 with size: 0.999878 MiB
00:04:08.489 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:08.489 element at address: 0x200031800000 with size: 0.994446 MiB
00:04:08.489 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:08.489 element at address: 0x200012c00000 with size: 0.954285 MiB
00:04:08.489 element at address: 0x200018e00000 with size: 0.936584 MiB
00:04:08.489 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:08.489 element at address: 0x20001a600000 with size: 0.582886 MiB
00:04:08.489 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:08.489 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:08.489 element at address: 0x200019000000 with size: 0.485657 MiB
00:04:08.489 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:08.489 element at address: 0x200027a00000 with size: 0.410034 MiB
00:04:08.489 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:08.489 list of standard malloc elements. size: 199.218628 MiB
00:04:08.489 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:08.489 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:08.489 element at address: 0x200018afff80 with size: 1.000122 MiB
00:04:08.489 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:04:08.489 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:08.489 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:08.489 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:04:08.489 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:08.489 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:04:08.489 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:08.489 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:08.489 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:08.489 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:04:08.490 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:04:08.490 element at address: 0x20001a695380 with size: 0.000183 MiB
00:04:08.490 element at address: 0x20001a695440 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200027a69040 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:04:08.490 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:04:08.490 list of memzone associated elements. size: 599.918884 MiB
00:04:08.490 element at address: 0x20001a695500 with size: 211.416748 MiB
00:04:08.490 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:08.490 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:04:08.490 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:08.490 element at address: 0x200012df4780 with size: 92.045044 MiB
00:04:08.490 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_742057_0
00:04:08.490 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:08.490 associated memzone info: size: 48.002930 MiB name: MP_msgpool_742057_0
00:04:08.490 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:08.490 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_742057_0
00:04:08.490 element at address: 0x2000191be940 with size: 20.255554 MiB
00:04:08.490 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:08.490 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:04:08.490 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:08.490 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:08.490 associated memzone info: size: 3.000122 MiB name: MP_evtpool_742057_0
00:04:08.490 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:08.490 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_742057
00:04:08.490 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:08.490 associated memzone info: size: 1.007996 MiB name: MP_evtpool_742057
00:04:08.490 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:08.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:08.490 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:04:08.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:08.490 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:08.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:08.490 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:08.490 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:08.490 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:08.490 associated memzone info: size: 1.000366 MiB name: RG_ring_0_742057
00:04:08.490 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:08.490 associated memzone info: size: 1.000366 MiB name: RG_ring_1_742057
00:04:08.490 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:04:08.490 associated memzone info: size: 1.000366 MiB name: RG_ring_4_742057
00:04:08.490 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:04:08.490 associated memzone info: size: 1.000366 MiB name: RG_ring_5_742057
00:04:08.490 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:08.490 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_742057
00:04:08.490 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:08.490 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_742057
00:04:08.490 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:08.490 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:08.490 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:08.490 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:08.490 element at address: 0x20001907c540 with size: 0.250488 MiB
00:04:08.490 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:08.490 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:08.490 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_742057
00:04:08.490 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:08.490 associated memzone info: size: 0.125366 MiB name: RG_ring_2_742057
00:04:08.490 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:08.490 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:08.490 element at address: 0x200027a69100 with size: 0.023743 MiB
00:04:08.490 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:08.490 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:08.490 associated memzone info: size: 0.015991 MiB name: RG_ring_3_742057
00:04:08.490 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:04:08.490 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:08.490 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:08.490 associated memzone info: size: 0.000183 MiB name: MP_msgpool_742057
00:04:08.490 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:08.490 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_742057
00:04:08.490 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:08.490 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_742057
00:04:08.490 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:04:08.490 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:08.490 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:08.490 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 742057
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 742057 ']'
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 742057
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742057
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742057'
00:04:08.490 killing process with pid 742057
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 742057
00:04:08.490 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 742057
00:04:08.751
00:04:08.751 real 0m1.399s
00:04:08.751 user 0m1.457s
00:04:08.751 sys 0m0.434s
00:04:08.751 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:08.751 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:08.751 ************************************
00:04:08.751 END TEST dpdk_mem_utility
00:04:08.751 ************************************
00:04:08.751 11:48:45 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:08.751 11:48:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.751 11:48:45 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.751 11:48:45 -- common/autotest_common.sh@10 -- # set +x
00:04:08.751 ************************************
00:04:08.751 START TEST event
00:04:08.751 ************************************
00:04:08.751 11:48:45 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:08.751 * Looking for test storage...
00:04:08.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:08.751 11:48:45 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1691 -- # lcov --version
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:09.012 11:48:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:09.012 11:48:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:09.012 11:48:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:09.012 11:48:45 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:09.012 11:48:45 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:09.012 11:48:45 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:09.012 11:48:45 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:09.012 11:48:45 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:09.012 11:48:45 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:09.012 11:48:45 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:09.012 11:48:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:09.012 11:48:45 event -- scripts/common.sh@344 -- # case "$op" in
00:04:09.012 11:48:45 event -- scripts/common.sh@345 -- # : 1
00:04:09.012 11:48:45 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:09.012 11:48:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:09.012 11:48:45 event -- scripts/common.sh@365 -- # decimal 1
00:04:09.012 11:48:45 event -- scripts/common.sh@353 -- # local d=1
00:04:09.012 11:48:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:09.012 11:48:45 event -- scripts/common.sh@355 -- # echo 1
00:04:09.012 11:48:45 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:09.012 11:48:45 event -- scripts/common.sh@366 -- # decimal 2
00:04:09.012 11:48:45 event -- scripts/common.sh@353 -- # local d=2
00:04:09.012 11:48:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:09.012 11:48:45 event -- scripts/common.sh@355 -- # echo 2
00:04:09.012 11:48:45 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:09.012 11:48:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:09.012 11:48:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:09.012 11:48:45 event -- scripts/common.sh@368 -- # return 0
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:09.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.012 --rc genhtml_branch_coverage=1
00:04:09.012 --rc genhtml_function_coverage=1
00:04:09.012 --rc genhtml_legend=1
00:04:09.012 --rc geninfo_all_blocks=1
00:04:09.012 --rc geninfo_unexecuted_blocks=1
00:04:09.012
00:04:09.012 '
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:09.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.012 --rc genhtml_branch_coverage=1
00:04:09.012 --rc genhtml_function_coverage=1
00:04:09.012 --rc genhtml_legend=1
00:04:09.012 --rc geninfo_all_blocks=1
00:04:09.012 --rc geninfo_unexecuted_blocks=1
00:04:09.012
00:04:09.012 '
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:09.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.012 --rc genhtml_branch_coverage=1
00:04:09.012 --rc genhtml_function_coverage=1
00:04:09.012 --rc genhtml_legend=1
00:04:09.012 --rc geninfo_all_blocks=1
00:04:09.012 --rc geninfo_unexecuted_blocks=1
00:04:09.012
00:04:09.012 '
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:09.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.012 --rc genhtml_branch_coverage=1
00:04:09.012 --rc genhtml_function_coverage=1
00:04:09.012 --rc genhtml_legend=1
00:04:09.012 --rc geninfo_all_blocks=1
00:04:09.012 --rc geninfo_unexecuted_blocks=1
00:04:09.012
00:04:09.012 '
00:04:09.012 11:48:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:09.012 11:48:45 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:09.012 11:48:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:09.012 11:48:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:09.012 11:48:45 event -- common/autotest_common.sh@10 -- # set +x
00:04:09.012 ************************************
00:04:09.012 START TEST event_perf
00:04:09.012 ************************************
00:04:09.012 11:48:45 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:09.012 Running I/O for 1 seconds...[2024-10-21 11:48:45.439806] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... [2024-10-21 11:48:45.439919] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742457 ] [2024-10-21 11:48:45.522460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 [2024-10-21 11:48:45.568372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 [2024-10-21 11:48:45.568448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 [2024-10-21 11:48:45.568604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 [2024-10-21 11:48:45.568605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:10.396 Running I/O for 1 seconds...
00:04:10.396 lcore 0: 182694
00:04:10.396 lcore 1: 182695
00:04:10.396 lcore 2: 182697
00:04:10.396 lcore 3: 182697
00:04:10.396 done.
00:04:10.396
00:04:10.396 real 0m1.179s
00:04:10.396 user 0m4.089s
00:04:10.396 sys 0m0.085s
00:04:10.396 11:48:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:10.396 11:48:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:10.396 ************************************
00:04:10.396 END TEST event_perf
00:04:10.396 ************************************
00:04:10.396 11:48:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:10.396 11:48:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:10.396 11:48:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:10.396 11:48:46 event -- common/autotest_common.sh@10 -- # set +x
00:04:10.396 ************************************
00:04:10.396 START TEST event_reactor
00:04:10.396 ************************************
00:04:10.396 11:48:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:10.396 [2024-10-21 11:48:46.693235] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:04:10.396 [2024-10-21 11:48:46.693340] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742813 ] 00:04:10.396 [2024-10-21 11:48:46.774134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.396 [2024-10-21 11:48:46.808730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.337 test_start 00:04:11.337 oneshot 00:04:11.337 tick 100 00:04:11.337 tick 100 00:04:11.337 tick 250 00:04:11.337 tick 100 00:04:11.337 tick 100 00:04:11.337 tick 100 00:04:11.337 tick 250 00:04:11.337 tick 500 00:04:11.337 tick 100 00:04:11.337 tick 100 00:04:11.337 tick 250 00:04:11.337 tick 100 00:04:11.337 tick 100 00:04:11.337 test_end 00:04:11.337 00:04:11.337 real 0m1.163s 00:04:11.337 user 0m1.083s 00:04:11.337 sys 0m0.077s 00:04:11.337 11:48:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.337 11:48:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:11.337 ************************************ 00:04:11.337 END TEST event_reactor 00:04:11.337 ************************************ 00:04:11.337 11:48:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:11.337 11:48:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:11.337 11:48:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.337 11:48:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.337 ************************************ 00:04:11.337 START TEST event_reactor_perf 00:04:11.337 ************************************ 00:04:11.337 11:48:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:11.337 [2024-10-21 11:48:47.931016] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:11.337 [2024-10-21 11:48:47.931095] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743123 ] 00:04:11.597 [2024-10-21 11:48:48.014110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.597 [2024-10-21 11:48:48.051739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.537 test_start 00:04:12.537 test_end 00:04:12.537 Performance: 538750 events per second 00:04:12.537 00:04:12.537 real 0m1.167s 00:04:12.537 user 0m1.081s 00:04:12.537 sys 0m0.083s 00:04:12.537 11:48:49 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.537 11:48:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:12.537 ************************************ 00:04:12.537 END TEST event_reactor_perf 00:04:12.537 ************************************ 00:04:12.537 11:48:49 event -- event/event.sh@49 -- # uname -s 00:04:12.537 11:48:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:12.537 11:48:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:12.537 11:48:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.537 11:48:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.537 11:48:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.798 ************************************ 00:04:12.798 START TEST event_scheduler 00:04:12.798 ************************************ 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:12.799 * Looking for test storage... 
00:04:12.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:12.799 11:48:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.799 --rc genhtml_branch_coverage=1
00:04:12.799 --rc genhtml_function_coverage=1
00:04:12.799 --rc genhtml_legend=1
00:04:12.799 --rc geninfo_all_blocks=1
00:04:12.799 --rc geninfo_unexecuted_blocks=1
00:04:12.799
00:04:12.799 '
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.799 --rc genhtml_branch_coverage=1
00:04:12.799 --rc genhtml_function_coverage=1
00:04:12.799 --rc genhtml_legend=1
00:04:12.799 --rc geninfo_all_blocks=1
00:04:12.799 --rc geninfo_unexecuted_blocks=1
00:04:12.799
00:04:12.799 '
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.799 --rc genhtml_branch_coverage=1
00:04:12.799 --rc genhtml_function_coverage=1
00:04:12.799 --rc genhtml_legend=1
00:04:12.799 --rc geninfo_all_blocks=1
00:04:12.799 --rc geninfo_unexecuted_blocks=1
00:04:12.799
00:04:12.799 '
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.799 --rc genhtml_branch_coverage=1
00:04:12.799 --rc genhtml_function_coverage=1
00:04:12.799 --rc genhtml_legend=1
00:04:12.799 --rc geninfo_all_blocks=1
00:04:12.799 --rc geninfo_unexecuted_blocks=1
00:04:12.799
00:04:12.799 '
00:04:12.799 11:48:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:12.799 11:48:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=743391
00:04:12.799 11:48:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:12.799 11:48:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 743391
00:04:12.799 11:48:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
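The launch line above is worth decoding: per standard SPDK application options, -m 0xF runs reactors on cores 0-3 (a core mask), -p 0x2 selects the main core (matching the --main-lcore=2 EAL parameter in the startup banner below), and --wait-for-rpc holds subsystem initialization until an RPC asks for it; -f is specific to this test binary and is not explained here. A rough by-hand sketch of the same sequence, substituting plain rpc.py calls for the suite's rpc_cmd wrapper (the backgrounding and socket path are assumptions, not taken from this run):

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # must happen before init
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init             # now let the app finish starting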
00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 743391 ']' 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.799 11:48:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.059 [2024-10-21 11:48:49.411467] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:13.059 [2024-10-21 11:48:49.411521] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743391 ] 00:04:13.059 [2024-10-21 11:48:49.491141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:13.059 [2024-10-21 11:48:49.538697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.059 [2024-10-21 11:48:49.538853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.059 [2024-10-21 11:48:49.539010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.059 [2024-10-21 11:48:49.539010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.628 11:48:50 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:13.628 11:48:50 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:13.628 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:13.628 11:48:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.628 11:48:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.889 [2024-10-21 11:48:50.225323] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:13.889 [2024-10-21 11:48:50.225352] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:13.889 [2024-10-21 11:48:50.225363] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:13.889 [2024-10-21 11:48:50.225369] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:13.889 [2024-10-21 11:48:50.225374] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.889 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.889 [2024-10-21 11:48:50.290916] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
11:48:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 ************************************
00:04:13.889 START TEST scheduler_create_thread
00:04:13.889 ************************************
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 2
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 3
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 4
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 5
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 6
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 7
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 8
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:13.889 9
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.889 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:14.460 10
00:04:14.460 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.460 11:48:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:14.460 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.460 11:48:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.845 11:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.845 11:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:15.845 11:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:15.845 11:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.845 11:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:16.786 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:16.786 11:48:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:16.786 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:16.786 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:17.356 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:17.356 11:48:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:17.356 11:48:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:17.356 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:17.356 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:18.296 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:18.296
00:04:18.296 real 0m4.226s
00:04:18.296 user 0m0.025s
00:04:18.296 sys 0m0.006s
00:04:18.296 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.296 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:18.296 ************************************
00:04:18.296 END TEST scheduler_create_thread
00:04:18.296 ************************************
00:04:18.296 11:48:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:18.296 11:48:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 743391
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 743391 ']'
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 743391
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 743391
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 743391'
00:04:18.296 killing process with pid 743391
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 743391
00:04:18.296 11:48:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 743391
00:04:18.296 [2024-10-21 11:48:54.836589] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
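The killprocess teardown traced just above follows a reusable shell pattern: kill -0 probes whether the PID is still alive without delivering a signal, ps -o comm= confirms the process name before anything is signalled, and wait reaps the child so its exit status is collected. Condensed into a sketch (the PID is this run's scheduler app; killprocess itself is the test suite's own helper, not a standard command):

    pid=743391
    kill -0 "$pid"                                     # liveness check only, sends no signal
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ]   # refuse to signal a sudo wrapper directly
    kill "$pid" && wait "$pid"                         # terminate, then reap the exit status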
00:04:18.557 00:04:18.557 real 0m5.831s 00:04:18.557 user 0m12.950s 00:04:18.557 sys 0m0.401s 00:04:18.557 11:48:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.557 11:48:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.557 ************************************ 00:04:18.557 END TEST event_scheduler 00:04:18.557 ************************************ 00:04:18.557 11:48:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.557 11:48:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.557 11:48:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.557 11:48:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.557 11:48:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.557 ************************************ 00:04:18.557 START TEST app_repeat 00:04:18.557 ************************************ 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=744621 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 744621' 00:04:18.557 Process app_repeat pid: 744621 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.557 spdk_app_start Round 0 00:04:18.557 11:48:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 744621 /var/tmp/spdk-nbd.sock 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 744621 ']' 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.557 11:48:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.557 [2024-10-21 11:48:55.107466] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:18.557 [2024-10-21 11:48:55.107529] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744621 ] 00:04:18.817 [2024-10-21 11:48:55.186913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.817 [2024-10-21 11:48:55.217857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.817 [2024-10-21 11:48:55.217859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.817 11:48:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.817 11:48:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:18.817 11:48:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.077 Malloc0 00:04:19.077 11:48:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.077 Malloc1 00:04:19.077 11:48:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.077 11:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.337 /dev/nbd0 00:04:19.337 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.338 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.338 1+0 records in 00:04:19.338 1+0 records out 00:04:19.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293661 s, 13.9 MB/s 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:19.338 11:48:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:19.338 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.338 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.338 11:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.598 /dev/nbd1 00:04:19.598 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.598 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.598 1+0 records in 00:04:19.598 1+0 records out 00:04:19.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301433 s, 13.6 MB/s 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:19.598 11:48:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.599 11:48:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:19.599 11:48:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:19.599 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.599 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.599 
11:48:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.599 11:48:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.599 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.859 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.859 { 00:04:19.859 "nbd_device": "/dev/nbd0", 00:04:19.859 "bdev_name": "Malloc0" 00:04:19.859 }, 00:04:19.859 { 00:04:19.859 "nbd_device": "/dev/nbd1", 00:04:19.859 "bdev_name": "Malloc1" 00:04:19.859 } 00:04:19.859 ]' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.860 { 00:04:19.860 "nbd_device": "/dev/nbd0", 00:04:19.860 "bdev_name": "Malloc0" 00:04:19.860 }, 00:04:19.860 { 00:04:19.860 "nbd_device": "/dev/nbd1", 00:04:19.860 "bdev_name": "Malloc1" 00:04:19.860 } 00:04:19.860 ]' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.860 /dev/nbd1' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.860 /dev/nbd1' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.860 256+0 records in 00:04:19.860 256+0 records out 00:04:19.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123691 s, 84.8 MB/s 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.860 256+0 records in 00:04:19.860 256+0 records out 00:04:19.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119833 s, 87.5 MB/s 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.860 256+0 records in 00:04:19.860 256+0 records out 00:04:19.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125887 s, 83.3 MB/s 00:04:19.860 11:48:56 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.860 11:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.121 11:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.381 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.642 11:48:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.642 11:48:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.901 11:48:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.901 [2024-10-21 11:48:57.330534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.901 [2024-10-21 11:48:57.360871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.901 [2024-10-21 11:48:57.360873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.901 [2024-10-21 11:48:57.389745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.901 [2024-10-21 11:48:57.389778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.203 11:49:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.203 11:49:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.203 spdk_app_start Round 1 00:04:24.203 11:49:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 744621 /var/tmp/spdk-nbd.sock 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 744621 ']' 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
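The trace above completes the first of three identical data-integrity passes; Round 1 begins next. The pass itself, condensed into a standalone sketch (the temp path is a placeholder; block counts and the cmp invocation match the dd/cmp output above):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern to each disk
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # read back, compare byte-for-byte
    done
    rm "$tmp"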
00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.203 11:49:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:24.203 11:49:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.203 Malloc0 00:04:24.203 11:49:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.203 Malloc1 00:04:24.463 11:49:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.463 11:49:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.463 11:49:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.463 11:49:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.464 11:49:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.464 /dev/nbd0 00:04:24.464 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.464 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.464 1+0 records in 00:04:24.464 1+0 records out 00:04:24.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273371 s, 15.0 MB/s 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.464 11:49:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.464 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.464 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.464 11:49:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.725 /dev/nbd1 00:04:24.725 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.725 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:24.725 11:49:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.726 1+0 records in 00:04:24.726 1+0 records out 00:04:24.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276274 s, 14.8 MB/s 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.726 11:49:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.726 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.726 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.726 11:49:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.726 11:49:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.726 11:49:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:24.987 { 00:04:24.987 "nbd_device": "/dev/nbd0", 00:04:24.987 "bdev_name": "Malloc0" 00:04:24.987 }, 00:04:24.987 { 00:04:24.987 "nbd_device": "/dev/nbd1", 00:04:24.987 "bdev_name": "Malloc1" 00:04:24.987 } 00:04:24.987 ]' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.987 { 00:04:24.987 "nbd_device": "/dev/nbd0", 00:04:24.987 "bdev_name": "Malloc0" 00:04:24.987 }, 00:04:24.987 { 00:04:24.987 "nbd_device": "/dev/nbd1", 00:04:24.987 "bdev_name": "Malloc1" 00:04:24.987 } 00:04:24.987 ]' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.987 /dev/nbd1' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.987 /dev/nbd1' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.987 256+0 records in 00:04:24.987 256+0 records out 00:04:24.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122325 s, 85.7 MB/s 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.987 256+0 records in 00:04:24.987 256+0 records out 00:04:24.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115051 s, 91.1 MB/s 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.987 11:49:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.248 256+0 records in 00:04:25.248 256+0 records out 00:04:25.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125362 s, 83.6 MB/s 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.248 11:49:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.510 11:49:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.771 11:49:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.771 11:49:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.031 11:49:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.031 [2024-10-21 11:49:02.496274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.031 [2024-10-21 11:49:02.526819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.031 [2024-10-21 11:49:02.526819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.031 [2024-10-21 11:49:02.556303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.031 [2024-10-21 11:49:02.556338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.328 11:49:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.328 11:49:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:29.328 spdk_app_start Round 2 00:04:29.328 11:49:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 744621 /var/tmp/spdk-nbd.sock 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 744621 ']' 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
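Just above, after the disks are stopped, the harness asks the target what is still exported and expects an empty list. A sketch of that count check as it appears in nbd_common.sh's trace (RPC path shortened; the expected count is 2 while the disks are attached and 0 after teardown):

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    names=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep exits non-zero on zero matches
    [ "$count" -eq 0 ] || echo "unexpected nbd count: $count"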
00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.328 11:49:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:29.328 11:49:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.328 Malloc0 00:04:29.328 11:49:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.588 Malloc1 00:04:29.588 11:49:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.588 11:49:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.588 /dev/nbd0 00:04:29.588 11:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.588 11:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.588 11:49:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:29.588 11:49:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.588 11:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.588 11:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.588 11:49:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.848 1+0 records in 00:04:29.848 1+0 records out 00:04:29.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215942 s, 19.0 MB/s 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.848 11:49:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.848 11:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.849 /dev/nbd1 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.849 1+0 records in 00:04:29.849 1+0 records out 00:04:29.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254757 s, 16.1 MB/s 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.849 11:49:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.849 11:49:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:30.109 { 00:04:30.109 "nbd_device": "/dev/nbd0", 00:04:30.109 "bdev_name": "Malloc0" 00:04:30.109 }, 00:04:30.109 { 00:04:30.109 "nbd_device": "/dev/nbd1", 00:04:30.109 "bdev_name": "Malloc1" 00:04:30.109 } 00:04:30.109 ]' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.109 { 00:04:30.109 "nbd_device": "/dev/nbd0", 00:04:30.109 "bdev_name": "Malloc0" 00:04:30.109 }, 00:04:30.109 { 00:04:30.109 "nbd_device": "/dev/nbd1", 00:04:30.109 "bdev_name": "Malloc1" 00:04:30.109 } 00:04:30.109 ]' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.109 /dev/nbd1' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.109 /dev/nbd1' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.109 11:49:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.109 256+0 records in 00:04:30.110 256+0 records out 00:04:30.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122356 s, 85.7 MB/s 00:04:30.110 11:49:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.110 11:49:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.110 256+0 records in 00:04:30.110 256+0 records out 00:04:30.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121142 s, 86.6 MB/s 00:04:30.110 11:49:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.110 11:49:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.369 256+0 records in 00:04:30.369 256+0 records out 00:04:30.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133545 s, 78.5 MB/s 00:04:30.369 11:49:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.369 11:49:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.369 11:49:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.369 11:49:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.370 11:49:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.630 11:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.890 11:49:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.890 11:49:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.150 11:49:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.150 [2024-10-21 11:49:07.632653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.150 [2024-10-21 11:49:07.663139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.150 [2024-10-21 11:49:07.663140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.150 [2024-10-21 11:49:07.692149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.150 [2024-10-21 11:49:07.692181] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.445 11:49:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 744621 /var/tmp/spdk-nbd.sock 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 744621 ']' 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
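The waitfornbd probe that gated each nbd_start_disk in the rounds above polls /proc/partitions and then proves the device readable with a single direct-I/O block. A reconstruction from the trace; the retry delay and temp path are assumptions, not taken from the log:

    waitfornbd() {
        local nbd_name=$1 i size tmpfile=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do                     # wait for the kernel to list the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                       # assumed delay between attempts
        done
        for ((i = 1; i <= 20; i++)); do                     # prove one block is actually readable
            dd if=/dev/"$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct 2>/dev/null
            size=$(stat -c %s "$tmpfile")
            rm -f "$tmpfile"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }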
00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:34.445 11:49:10 event.app_repeat -- event/event.sh@39 -- # killprocess 744621 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 744621 ']' 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 744621 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 744621 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 744621' 00:04:34.445 killing process with pid 744621 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@969 -- # kill 744621 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@974 -- # wait 744621 00:04:34.445 spdk_app_start is called in Round 0. 00:04:34.445 Shutdown signal received, stop current app iteration 00:04:34.445 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:04:34.445 spdk_app_start is called in Round 1. 00:04:34.445 Shutdown signal received, stop current app iteration 00:04:34.445 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:04:34.445 spdk_app_start is called in Round 2. 00:04:34.445 Shutdown signal received, stop current app iteration 00:04:34.445 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:04:34.445 spdk_app_start is called in Round 3. 
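The four "Round" notices above come from the app_repeat binary itself: the test script sends SIGTERM once per round via spdk_kill_instance and the app reinitializes its poll loop each time, while the script's trap guarantees cleanup if the test aborts. Sketched from the trace (the per-round body is elided):

    # repeat_pid holds the pid of the app_repeat binary, captured at launch
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT   # cleanup on abort
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... create bdevs, attach, write/verify, detach NBD disks ...
        $RPC spdk_kill_instance SIGTERM   # app logs the shutdown, then starts the next round
        sleep 3
    done
    trap - SIGINT SIGTERM EXIT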
00:04:34.445 Shutdown signal received, stop current app iteration 00:04:34.445 11:49:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:34.445 11:49:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:34.445 00:04:34.445 real 0m15.827s 00:04:34.445 user 0m34.748s 00:04:34.445 sys 0m2.303s 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.445 11:49:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.445 ************************************ 00:04:34.445 END TEST app_repeat 00:04:34.445 ************************************ 00:04:34.445 11:49:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:34.445 11:49:10 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:34.445 11:49:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.445 11:49:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.445 11:49:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.445 ************************************ 00:04:34.445 START TEST cpu_locks ************************************ 00:04:34.445 11:49:10 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:34.706 * Looking for test storage... 00:04:34.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.706 11:49:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.706 11:49:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.706 --rc genhtml_branch_coverage=1 00:04:34.706 --rc genhtml_function_coverage=1 00:04:34.707 --rc genhtml_legend=1 00:04:34.707 --rc geninfo_all_blocks=1 00:04:34.707 --rc geninfo_unexecuted_blocks=1 00:04:34.707 00:04:34.707 ' 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.707 --rc genhtml_branch_coverage=1 00:04:34.707 --rc genhtml_function_coverage=1 00:04:34.707 --rc genhtml_legend=1 00:04:34.707 --rc geninfo_all_blocks=1 00:04:34.707 --rc geninfo_unexecuted_blocks=1 00:04:34.707 00:04:34.707 ' 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.707 --rc genhtml_branch_coverage=1 00:04:34.707 --rc genhtml_function_coverage=1 00:04:34.707 --rc genhtml_legend=1 00:04:34.707 --rc geninfo_all_blocks=1 00:04:34.707 --rc geninfo_unexecuted_blocks=1 00:04:34.707 00:04:34.707 ' 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.707 --rc genhtml_branch_coverage=1 00:04:34.707 --rc genhtml_function_coverage=1 00:04:34.707 --rc genhtml_legend=1 00:04:34.707 --rc geninfo_all_blocks=1 00:04:34.707 --rc geninfo_unexecuted_blocks=1 00:04:34.707 00:04:34.707 ' 00:04:34.707 11:49:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:34.707 11:49:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:34.707 11:49:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:34.707 11:49:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.707 11:49:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.707 ************************************
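The lcov probe traced above walks the version comparison in scripts/common.sh: lt 1.15 2 splits both versions on the characters . - :, treats absent components as zero, and compares component by component. A condensed sketch of that logic, with the traced decimal validation folded into the loop:

```bash
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                 # split exactly as the trace does
    local op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent components count as 0
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *'='* ]]            # equal versions satisfy ==, >=, <=
}

lt 1.15 2 && echo "lcov older than 2: use the --rc lcov_* option spelling"
```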
00:04:34.707 START TEST default_locks 00:04:34.707 ************************************ 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=747931 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 747931 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 747931 ']' 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.707 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.707 [2024-10-21 11:49:11.272983] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:34.707 [2024-10-21 11:49:11.273035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747931 ] 00:04:34.966 [2024-10-21 11:49:11.318658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.966 [2024-10-21 11:49:11.352141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.966 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.967 11:49:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:34.967 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 747931 00:04:34.967 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 747931 00:04:34.967 11:49:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.535 lslocks: write error 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 747931 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 747931 ']' 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 747931 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.535 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 747931' 
00:04:35.795 killing process with pid 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 747931 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 747931 ']' 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.795 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
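The block above is the negative test: killprocess has already reaped pid 747931, so waitforlisten on it must now fail, and the NOT wrapper turns that failure into a pass. A sketch of NOT as the trace shows it (the es > 128 signal handling and the valid_exec_arg type check are elided, since both are no-ops in the runs above):

```bash
NOT() {
    local es=0
    "$@" || es=$?
    # the traced helper also inspects es > 128 (death by signal) and an
    # allow-list of expected errors; neither branch fires here
    (( !es == 0 ))           # succeed exactly when the wrapped command failed
}

NOT waitforlisten 747931    # passes because the pid is gone
```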
00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (747931) - No such process 00:04:35.796 ERROR: process (pid: 747931) is no longer running 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.796 00:04:35.796 real 0m1.134s 00:04:35.796 user 0m1.147s 00:04:35.796 sys 0m0.540s 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.796 11:49:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.796 ************************************ 00:04:35.796 END TEST default_locks 00:04:35.796 ************************************ 00:04:35.796 11:49:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:35.796 11:49:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.796 11:49:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.796 11:49:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.056 ************************************ 00:04:36.056 START TEST default_locks_via_rpc 00:04:36.056 ************************************ 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=748252 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 748252 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 748252 ']' 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
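Both the default_locks run above and the via_rpc variant now starting lean on the same locks_exist probe: a target started with -m 0x1 holds /var/tmp/spdk_cpu_lock_000, and lslocks lists it among the locks held by the pid. The stray "lslocks: write error" lines are just lslocks hitting a closed pipe once grep -q matches and exits early, not a failure. A sketch matching the traced pipeline:

```bash
locks_exist() {
    # grep -q exits on the first match, which is what makes lslocks
    # print "write error" on its side of the pipe
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist 747931 && echo "core lock still held"
```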
00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.056 11:49:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.056 [2024-10-21 11:49:12.487003] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:36.056 [2024-10-21 11:49:12.487059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748252 ] 00:04:36.056 [2024-10-21 11:49:12.565729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.056 [2024-10-21 11:49:12.597894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 748252 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 748252 00:04:36.996 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 748252 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 748252 ']' 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 748252 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.256 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 748252 00:04:37.515 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.515 11:49:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.515 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 748252' 00:04:37.515 killing process with pid 748252 00:04:37.515 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 748252 00:04:37.515 11:49:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 748252 00:04:37.515 00:04:37.515 real 0m1.641s 00:04:37.515 user 0m1.768s 00:04:37.515 sys 0m0.570s 00:04:37.515 11:49:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.515 11:49:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.515 ************************************ 00:04:37.515 END TEST default_locks_via_rpc 00:04:37.515 ************************************ 00:04:37.515 11:49:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.515 11:49:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.515 11:49:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.515 11:49:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.774 ************************************ 00:04:37.774 START TEST non_locking_app_on_locked_coremask 00:04:37.775 ************************************ 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=748615 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 748615 /var/tmp/spdk.sock 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 748615 ']' 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.775 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.775 [2024-10-21 11:49:14.188223] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
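default_locks_via_rpc, which just finished above, toggles the same locks at runtime instead of at startup. A sketch of the traced sequence, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket stands in for the rpc_cmd wrapper, with $tgt_pid standing for the target's pid (748252 in the run above):

```bash
./scripts/rpc.py framework_disable_cpumask_locks    # target releases /var/tmp/spdk_cpu_lock_*
! lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock     # no_locks: nothing held now
./scripts/rpc.py framework_enable_cpumask_locks     # locks are taken again
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock       # locks_exist passes once more
```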
00:04:37.775 [2024-10-21 11:49:14.188263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748615 ] 00:04:37.775 [2024-10-21 11:49:14.255750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.775 [2024-10-21 11:49:14.285703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=748629 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 748629 /var/tmp/spdk2.sock 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 748629 ']' 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.034 11:49:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.034 [2024-10-21 11:49:14.528596] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:38.034 [2024-10-21 11:49:14.528654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748629 ] 00:04:38.034 [2024-10-21 11:49:14.602082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
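non_locking_app_on_locked_coremask, starting above, launches a second target on the very core the first one has locked; the second opts out of locking and gets its own RPC socket, so both come up. A sketch of that pairing, with the binary path shortened from the workspace layout in the log:

```bash
build/bin/spdk_tgt -m 0x1 &                                       # claims spdk_cpu_lock_000
pid1=$!
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                                           # same core, takes no lock
```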
00:04:38.034 [2024-10-21 11:49:14.602105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.294 [2024-10-21 11:49:14.664346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.864 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.864 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:38.864 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 748615 00:04:38.864 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 748615 00:04:38.864 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.123 lslocks: write error 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 748615 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 748615 ']' 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 748615 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 748615 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 748615' 00:04:39.123 killing process with pid 748615 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 748615 00:04:39.123 11:49:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 748615 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 748629 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 748629 ']' 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 748629 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 748629 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 748629' 00:04:39.693 killing 
process with pid 748629 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 748629 00:04:39.693 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 748629 00:04:39.954 00:04:39.954 real 0m2.160s 00:04:39.954 user 0m2.391s 00:04:39.954 sys 0m0.757s 00:04:39.954 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.954 11:49:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 ************************************ 00:04:39.954 END TEST non_locking_app_on_locked_coremask 00:04:39.954 ************************************ 00:04:39.954 11:49:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:39.954 11:49:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.954 11:49:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.954 11:49:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 ************************************ 00:04:39.954 START TEST locking_app_on_unlocked_coremask 00:04:39.954 ************************************ 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=749088 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 749088 /var/tmp/spdk.sock 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 749088 ']' 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.954 11:49:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 [2024-10-21 11:49:16.424468] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:39.954 [2024-10-21 11:49:16.424526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749088 ] 00:04:39.954 [2024-10-21 11:49:16.500929] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
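Every target startup in this log funnels through waitforlisten, whose "Waiting for process to start up..." echo keeps recurring. A rough sketch of its shape; only the pid argument, the rpc_addr default, max_retries=100 and the echo are visible in the traces, so the polling interval and the socket probe here are assumptions:

```bash
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1       # target died before it could listen
        [ -S "$rpc_addr" ] && return 0   # socket file exists; assume it is listening
        sleep 0.1                        # assumed back-off between retries
    done
    return 1
}
```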
00:04:39.954 [2024-10-21 11:49:16.500957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.954 [2024-10-21 11:49:16.535245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=749324 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 749324 /var/tmp/spdk2.sock 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 749324 ']' 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.894 11:49:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.894 [2024-10-21 11:49:17.246553] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:40.894 [2024-10-21 11:49:17.246604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749324 ] 00:04:40.894 [2024-10-21 11:49:17.318879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.894 [2024-10-21 11:49:17.376973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.463 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.463 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:41.463 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 749324 00:04:41.463 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 749324 00:04:41.463 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.033 lslocks: write error 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 749088 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 749088 ']' 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 749088 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.033 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 749088 00:04:42.293 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.293 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.293 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 749088' 00:04:42.293 killing process with pid 749088 00:04:42.293 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 749088 00:04:42.293 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 749088 00:04:42.553 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 749324 00:04:42.553 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 749324 ']' 00:04:42.553 11:49:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 749324 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 749324 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.553 11:49:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 749324' 00:04:42.553 killing process with pid 749324 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 749324 00:04:42.553 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 749324 00:04:42.814 00:04:42.814 real 0m2.879s 00:04:42.814 user 0m3.200s 00:04:42.814 sys 0m0.879s 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.814 ************************************ 00:04:42.814 END TEST locking_app_on_unlocked_coremask 00:04:42.814 ************************************ 00:04:42.814 11:49:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:42.814 11:49:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.814 11:49:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.814 11:49:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.814 ************************************ 00:04:42.814 START TEST locking_app_on_locked_coremask 00:04:42.814 ************************************ 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=749703 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 749703 /var/tmp/spdk.sock 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 749703 ']' 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.814 11:49:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.814 [2024-10-21 11:49:19.391975] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:42.814 [2024-10-21 11:49:19.392025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749703 ] 00:04:43.074 [2024-10-21 11:49:19.465306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.074 [2024-10-21 11:49:19.497124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=749988 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 749988 /var/tmp/spdk2.sock 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 749988 /var/tmp/spdk2.sock 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.644 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 749988 /var/tmp/spdk2.sock 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 749988 ']' 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.645 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.905 [2024-10-21 11:49:20.242005] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:43.905 [2024-10-21 11:49:20.242058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749988 ] 00:04:43.905 [2024-10-21 11:49:20.312652] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 749703 has claimed it. 00:04:43.905 [2024-10-21 11:49:20.312686] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (749988) - No such process 00:04:44.476 ERROR: process (pid: 749988) is no longer running 00:04:44.476 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.476 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 749703 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 749703 00:04:44.477 11:49:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.738 lslocks: write error 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 749703 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 749703 ']' 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 749703 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 749703 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 749703' 00:04:44.738 killing process with pid 749703 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 749703 00:04:44.738 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 749703 00:04:44.999 00:04:44.999 real 0m2.176s 00:04:44.999 user 0m2.492s 00:04:44.999 sys 0m0.585s 00:04:44.999 11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.999 
11:49:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.999 ************************************ 00:04:44.999 END TEST locking_app_on_locked_coremask 00:04:44.999 ************************************ 00:04:44.999 11:49:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:44.999 11:49:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.999 11:49:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.999 11:49:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.999 ************************************ 00:04:44.999 START TEST locking_overlapped_coremask 00:04:44.999 ************************************ 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=750215 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 750215 /var/tmp/spdk.sock 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 750215 ']' 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.999 11:49:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.260 [2024-10-21 11:49:21.635885] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
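The locking_app_on_locked_coremask failure path just closed out above: the second target (749988) asked for core 0, found it already claimed by 749703, logged the claim_cpu_cores error and exited before ever listening, which is why waitforlisten was wrapped in NOT and the follow-up kill reported "No such process". Two hedged shell probes for watching this by hand:

```bash
ls /var/tmp/spdk_cpu_lock_*      # one lock file per claimed core (000, 001, ...)
lslocks | grep spdk_cpu_lock     # which pid currently holds each core's lock
```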
00:04:45.260 [2024-10-21 11:49:21.635945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750215 ] 00:04:45.260 [2024-10-21 11:49:21.715945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.260 [2024-10-21 11:49:21.759478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.260 [2024-10-21 11:49:21.759617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.260 [2024-10-21 11:49:21.759619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=750415 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 750415 /var/tmp/spdk2.sock 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 750415 /var/tmp/spdk2.sock 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 750415 /var/tmp/spdk2.sock 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 750415 ']' 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.203 11:49:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.203 [2024-10-21 11:49:22.502541] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
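The overlapped tests running here encode cores as a hex bitmask: the first target uses -m 0x7, the second -m 0x1c. A small decoder makes the collision obvious:

```bash
mask_to_cores() {
    local mask=$(( $1 )) core cores=()
    for ((core = 0; core < 64; core++)); do
        (( mask & (1 << core) )) && cores+=("$core")
    done
    echo "${cores[@]}"
}

mask_to_cores 0x7    # -> 0 1 2
mask_to_cores 0x1c   # -> 2 3 4   (core 2 is contested, hence the claim error)
```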
00:04:46.203 [2024-10-21 11:49:22.502595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750415 ] 00:04:46.203 [2024-10-21 11:49:22.597439] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 750215 has claimed it. 00:04:46.203 [2024-10-21 11:49:22.597478] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (750415) - No such process 00:04:46.775 ERROR: process (pid: 750415) is no longer running 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 750215 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 750215 ']' 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 750215 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750215 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750215' 00:04:46.775 killing process with pid 750215 00:04:46.775 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 750215 00:04:46.775 11:49:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 750215 00:04:47.036 00:04:47.036 real 0m1.793s 00:04:47.036 user 0m5.177s 00:04:47.036 sys 0m0.409s 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.036 ************************************ 00:04:47.036 END TEST locking_overlapped_coremask 00:04:47.036 ************************************ 00:04:47.036 11:49:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:47.036 11:49:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.036 11:49:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.036 11:49:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.036 ************************************ 00:04:47.036 START TEST locking_overlapped_coremask_via_rpc 00:04:47.036 ************************************ 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=750660 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 750660 /var/tmp/spdk.sock 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 750660 ']' 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.036 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.036 [2024-10-21 11:49:23.499561] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:47.036 [2024-10-21 11:49:23.499603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750660 ] 00:04:47.036 [2024-10-21 11:49:23.543227] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
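After the contested target exits, the trace above runs check_remaining_locks: exactly the three lock files for the surviving -m 0x7 target (cores 0 through 2) must remain. A sketch matching the globs and comparison in the trace:

```bash
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # the lists must match element for element: no stale and no missing locks
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}
```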
00:04:47.037 [2024-10-21 11:49:23.543248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.037 [2024-10-21 11:49:23.575289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.037 [2024-10-21 11:49:23.575451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.037 [2024-10-21 11:49:23.575453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=750786 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 750786 /var/tmp/spdk2.sock 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 750786 ']' 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.297 11:49:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.297 [2024-10-21 11:49:23.811829] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:47.297 [2024-10-21 11:49:23.811884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750786 ] 00:04:47.557 [2024-10-21 11:49:23.901862] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.557 [2024-10-21 11:49:23.901890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.557 [2024-10-21 11:49:23.979409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.557 [2024-10-21 11:49:23.979454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.557 [2024-10-21 11:49:23.979455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.128 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.128 [2024-10-21 11:49:24.608399] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 750660 has claimed it. 
00:04:48.128 request: 00:04:48.128 { 00:04:48.129 "method": "framework_enable_cpumask_locks", 00:04:48.129 "req_id": 1 00:04:48.129 } 00:04:48.129 Got JSON-RPC error response 00:04:48.129 response: 00:04:48.129 { 00:04:48.129 "code": -32603, 00:04:48.129 "message": "Failed to claim CPU core: 2" 00:04:48.129 } 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 750660 /var/tmp/spdk.sock 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 750660 ']' 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.129 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 750786 /var/tmp/spdk2.sock 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 750786 ']' 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
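The -32603 "Failed to claim CPU core: 2" above is deterministic: the first target was launched with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks intersect on core 2 and the second target loses the claim as soon as locks are enabled. A minimal sketch of the overlap, with the masks copied from the spdk_tgt invocations above (the variable names are illustrative):

  mask_a=0x7    # first spdk_tgt: cores 0-2
  mask_b=0x1c   # second spdk_tgt: cores 2-4
  printf 'contested mask: 0x%x\n' $(( mask_a & mask_b ))   # prints 0x4, i.e. core 2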
00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.390 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.390 00:04:48.390 real 0m1.538s 00:04:48.391 user 0m0.727s 00:04:48.391 sys 0m0.127s 00:04:48.391 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.391 11:49:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.391 ************************************ 00:04:48.391 END TEST locking_overlapped_coremask_via_rpc 00:04:48.391 ************************************ 00:04:48.651 11:49:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.652 11:49:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 750660 ]] 00:04:48.652 11:49:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 750660 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 750660 ']' 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 750660 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750660 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750660' 00:04:48.652 killing process with pid 750660 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 750660 00:04:48.652 11:49:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 750660 00:04:48.912 11:49:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 750786 ]] 00:04:48.912 11:49:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 750786 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 750786 ']' 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 750786 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
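check_remaining_locks, traced above, is a plain glob-versus-brace-expansion comparison: after the second target fails its claim, the first target (mask 0x7) must still hold exactly lock files 000-002 under /var/tmp. A runnable sketch of the same check, using the paths from the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'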
00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750786 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750786' 00:04:48.912 killing process with pid 750786 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 750786 00:04:48.912 11:49:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 750786 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 750660 ]] 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 750660 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 750660 ']' 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 750660 00:04:49.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (750660) - No such process 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 750660 is not found' 00:04:49.173 Process with pid 750660 is not found 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 750786 ]] 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 750786 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 750786 ']' 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 750786 00:04:49.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (750786) - No such process 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 750786 is not found' 00:04:49.173 Process with pid 750786 is not found 00:04:49.173 11:49:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.173 00:04:49.173 real 0m14.604s 00:04:49.173 user 0m25.562s 00:04:49.173 sys 0m4.767s 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.173 11:49:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.173 ************************************ 00:04:49.173 END TEST cpu_locks 00:04:49.173 ************************************ 00:04:49.173 00:04:49.173 real 0m40.436s 00:04:49.173 user 1m19.804s 00:04:49.173 sys 0m8.121s 00:04:49.173 11:49:25 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.173 11:49:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.173 ************************************ 00:04:49.173 END TEST event 00:04:49.173 ************************************ 00:04:49.173 11:49:25 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.173 11:49:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.173 11:49:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.173 11:49:25 -- common/autotest_common.sh@10 -- # set +x 00:04:49.173 ************************************ 00:04:49.173 START TEST thread 00:04:49.173 ************************************ 00:04:49.173 11:49:25 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.434 * Looking for test storage... 00:04:49.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.434 11:49:25 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.434 11:49:25 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.434 11:49:25 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.434 11:49:25 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.434 11:49:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.434 11:49:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.434 11:49:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.434 11:49:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.434 11:49:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.434 11:49:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.434 11:49:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.434 11:49:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.434 11:49:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.434 11:49:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.434 11:49:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.434 11:49:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.434 11:49:25 thread -- scripts/common.sh@345 -- # : 1 00:04:49.434 11:49:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.434 11:49:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.434 11:49:25 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.434 11:49:25 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.434 11:49:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.434 11:49:25 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.434 11:49:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.434 11:49:25 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.434 11:49:25 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.434 11:49:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.434 11:49:25 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.434 11:49:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.434 11:49:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.434 11:49:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.434 11:49:25 thread -- scripts/common.sh@368 -- # return 0 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.435 --rc genhtml_branch_coverage=1 00:04:49.435 --rc genhtml_function_coverage=1 00:04:49.435 --rc genhtml_legend=1 00:04:49.435 --rc geninfo_all_blocks=1 00:04:49.435 --rc geninfo_unexecuted_blocks=1 00:04:49.435 00:04:49.435 ' 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.435 --rc genhtml_branch_coverage=1 00:04:49.435 --rc genhtml_function_coverage=1 00:04:49.435 --rc genhtml_legend=1 00:04:49.435 --rc geninfo_all_blocks=1 00:04:49.435 --rc geninfo_unexecuted_blocks=1 00:04:49.435 00:04:49.435 ' 00:04:49.435 11:49:25 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.435 --rc genhtml_branch_coverage=1 00:04:49.435 --rc genhtml_function_coverage=1 00:04:49.435 --rc genhtml_legend=1 00:04:49.435 --rc geninfo_all_blocks=1 00:04:49.435 --rc geninfo_unexecuted_blocks=1 00:04:49.435 00:04:49.435 ' 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.435 --rc genhtml_branch_coverage=1 00:04:49.435 --rc genhtml_function_coverage=1 00:04:49.435 --rc genhtml_legend=1 00:04:49.435 --rc geninfo_all_blocks=1 00:04:49.435 --rc geninfo_unexecuted_blocks=1 00:04:49.435 00:04:49.435 ' 00:04:49.435 11:49:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.435 11:49:25 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.435 ************************************ 00:04:49.435 START TEST thread_poller_perf 00:04:49.435 ************************************ 00:04:49.435 11:49:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.435 [2024-10-21 11:49:25.955117] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:49.435 [2024-10-21 11:49:25.955220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751230 ] 00:04:49.696 [2024-10-21 11:49:26.033776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.696 [2024-10-21 11:49:26.064638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.696 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:50.638 [2024-10-21T09:49:27.233Z] ====================================== 00:04:50.638 [2024-10-21T09:49:27.233Z] busy:2407708604 (cyc) 00:04:50.638 [2024-10-21T09:49:27.233Z] total_run_count: 419000 00:04:50.638 [2024-10-21T09:49:27.233Z] tsc_hz: 2400000000 (cyc) 00:04:50.638 [2024-10-21T09:49:27.233Z] ====================================== 00:04:50.638 [2024-10-21T09:49:27.233Z] poller_cost: 5746 (cyc), 2394 (nsec) 00:04:50.638 00:04:50.638 real 0m1.164s 00:04:50.638 user 0m1.081s 00:04:50.638 sys 0m0.078s 00:04:50.638 11:49:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.638 11:49:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.638 ************************************ 00:04:50.638 END TEST thread_poller_perf 00:04:50.638 ************************************ 00:04:50.638 11:49:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.638 11:49:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:50.638 11:49:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.638 11:49:27 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.638 ************************************ 00:04:50.638 START TEST thread_poller_perf 00:04:50.638 ************************************ 00:04:50.638 11:49:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.638 [2024-10-21 11:49:27.198108] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:50.638 [2024-10-21 11:49:27.198212] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751580 ] 00:04:50.898 [2024-10-21 11:49:27.276738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.898 [2024-10-21 11:49:27.305697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.898 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:51.839 [2024-10-21T09:49:28.434Z] ====================================== 00:04:51.839 [2024-10-21T09:49:28.434Z] busy:2401299110 (cyc) 00:04:51.839 [2024-10-21T09:49:28.434Z] total_run_count: 5564000 00:04:51.839 [2024-10-21T09:49:28.434Z] tsc_hz: 2400000000 (cyc) 00:04:51.839 [2024-10-21T09:49:28.434Z] ====================================== 00:04:51.839 [2024-10-21T09:49:28.434Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:51.839 00:04:51.839 real 0m1.156s 00:04:51.839 user 0m1.084s 00:04:51.839 sys 0m0.069s 00:04:51.839 11:49:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.839 11:49:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.839 ************************************ 00:04:51.839 END TEST thread_poller_perf 00:04:51.839 ************************************ 00:04:51.839 11:49:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.839 00:04:51.839 real 0m2.676s 00:04:51.839 user 0m2.349s 00:04:51.839 sys 0m0.342s 00:04:51.839 11:49:28 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.839 11:49:28 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.839 ************************************ 00:04:51.839 END TEST thread 00:04:51.839 ************************************ 00:04:51.839 11:49:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:51.839 11:49:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:51.839 11:49:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.839 11:49:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.839 11:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:52.101 ************************************ 00:04:52.101 START TEST app_cmdline 00:04:52.101 ************************************ 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.101 * Looking for test storage... 
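The poller_cost figures in the two tables above are straight division: busy cycles over total_run_count, converted to nanoseconds via tsc_hz. A sketch with the first table's numbers (the second table works out the same way: 2401299110/5564000 -> 431 cyc, 179 nsec):

  busy=2407708604 runs=419000 tsc_hz=2400000000   # figures from the 1-usec-period table
  echo "poller_cost: $(( busy / runs )) cyc"                         # 5746
  echo "poller_cost: $(( busy / runs * 1000000000 / tsc_hz )) nsec"  # 2394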
00:04:52.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.101 11:49:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.101 --rc genhtml_branch_coverage=1 00:04:52.101 --rc genhtml_function_coverage=1 00:04:52.101 --rc genhtml_legend=1 00:04:52.101 --rc geninfo_all_blocks=1 00:04:52.101 --rc geninfo_unexecuted_blocks=1 00:04:52.101 00:04:52.101 ' 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.101 --rc genhtml_branch_coverage=1 00:04:52.101 --rc genhtml_function_coverage=1 00:04:52.101 --rc genhtml_legend=1 00:04:52.101 --rc geninfo_all_blocks=1 00:04:52.101 --rc geninfo_unexecuted_blocks=1 
00:04:52.101 00:04:52.101 ' 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.101 --rc genhtml_branch_coverage=1 00:04:52.101 --rc genhtml_function_coverage=1 00:04:52.101 --rc genhtml_legend=1 00:04:52.101 --rc geninfo_all_blocks=1 00:04:52.101 --rc geninfo_unexecuted_blocks=1 00:04:52.101 00:04:52.101 ' 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.101 --rc genhtml_branch_coverage=1 00:04:52.101 --rc genhtml_function_coverage=1 00:04:52.101 --rc genhtml_legend=1 00:04:52.101 --rc geninfo_all_blocks=1 00:04:52.101 --rc geninfo_unexecuted_blocks=1 00:04:52.101 00:04:52.101 ' 00:04:52.101 11:49:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:52.101 11:49:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=751948 00:04:52.101 11:49:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 751948 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 751948 ']' 00:04:52.101 11:49:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.101 11:49:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.361 [2024-10-21 11:49:28.717646] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:52.361 [2024-10-21 11:49:28.717728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751948 ] 00:04:52.361 [2024-10-21 11:49:28.798990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.361 [2024-10-21 11:49:28.834311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.945 11:49:29 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.945 11:49:29 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:04:52.945 11:49:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:53.205 { 00:04:53.205 "version": "SPDK v25.01-pre git sha1 1042d663d", 00:04:53.205 "fields": { 00:04:53.205 "major": 25, 00:04:53.205 "minor": 1, 00:04:53.205 "patch": 0, 00:04:53.205 "suffix": "-pre", 00:04:53.205 "commit": "1042d663d" 00:04:53.205 } 00:04:53.205 } 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:53.205 11:49:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:53.205 11:49:29 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.465 request: 00:04:53.465 { 00:04:53.465 "method": "env_dpdk_get_mem_stats", 00:04:53.465 "req_id": 1 00:04:53.465 } 00:04:53.465 Got JSON-RPC error response 00:04:53.465 response: 00:04:53.465 { 00:04:53.465 "code": -32601, 00:04:53.465 "message": "Method not found" 00:04:53.465 } 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.465 11:49:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 751948 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 751948 ']' 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 751948 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 751948 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 751948' 00:04:53.465 killing process with pid 751948 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@969 -- # kill 751948 00:04:53.465 11:49:29 app_cmdline -- common/autotest_common.sh@974 -- # wait 751948 00:04:53.726 00:04:53.726 real 0m1.685s 00:04:53.726 user 0m2.023s 00:04:53.726 sys 0m0.440s 00:04:53.726 11:49:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.726 11:49:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.726 ************************************ 00:04:53.726 END TEST app_cmdline 00:04:53.726 ************************************ 00:04:53.726 11:49:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.726 11:49:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.726 11:49:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.726 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.726 ************************************ 00:04:53.726 START TEST version 00:04:53.726 ************************************ 00:04:53.726 11:49:30 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.726 * Looking for test storage... 
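The -32601 above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer and anything else (here env_dpdk_get_mem_stats) is rejected before dispatch. A sketch of the same exchange from a shell, paths relative to an SPDK checkout as in the trace (the test itself gates on waitforlisten before issuing the first call):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py rpc_get_methods          # allowed: lists the two methods
  ./scripts/rpc.py spdk_get_version         # allowed: the version JSON shown above
  ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: -32601 Method not found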
00:04:53.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.726 11:49:30 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.726 11:49:30 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.726 11:49:30 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.033 11:49:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.033 11:49:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.033 11:49:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.033 11:49:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.033 11:49:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.033 11:49:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.033 11:49:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.033 11:49:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.033 11:49:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.033 11:49:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.033 11:49:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.033 11:49:30 version -- scripts/common.sh@344 -- # case "$op" in 00:04:54.033 11:49:30 version -- scripts/common.sh@345 -- # : 1 00:04:54.033 11:49:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.033 11:49:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.033 11:49:30 version -- scripts/common.sh@365 -- # decimal 1 00:04:54.033 11:49:30 version -- scripts/common.sh@353 -- # local d=1 00:04:54.033 11:49:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.033 11:49:30 version -- scripts/common.sh@355 -- # echo 1 00:04:54.033 11:49:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.033 11:49:30 version -- scripts/common.sh@366 -- # decimal 2 00:04:54.033 11:49:30 version -- scripts/common.sh@353 -- # local d=2 00:04:54.033 11:49:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.033 11:49:30 version -- scripts/common.sh@355 -- # echo 2 00:04:54.033 11:49:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.033 11:49:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.033 11:49:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.033 11:49:30 version -- scripts/common.sh@368 -- # return 0 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.033 --rc genhtml_branch_coverage=1 00:04:54.033 --rc genhtml_function_coverage=1 00:04:54.033 --rc genhtml_legend=1 00:04:54.033 --rc geninfo_all_blocks=1 00:04:54.033 --rc geninfo_unexecuted_blocks=1 00:04:54.033 00:04:54.033 ' 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.033 --rc genhtml_branch_coverage=1 00:04:54.033 --rc genhtml_function_coverage=1 00:04:54.033 --rc genhtml_legend=1 00:04:54.033 --rc geninfo_all_blocks=1 00:04:54.033 --rc geninfo_unexecuted_blocks=1 00:04:54.033 00:04:54.033 ' 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.033 --rc genhtml_branch_coverage=1 00:04:54.033 --rc genhtml_function_coverage=1 00:04:54.033 --rc genhtml_legend=1 00:04:54.033 --rc geninfo_all_blocks=1 00:04:54.033 --rc geninfo_unexecuted_blocks=1 00:04:54.033 00:04:54.033 ' 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.033 --rc genhtml_branch_coverage=1 00:04:54.033 --rc genhtml_function_coverage=1 00:04:54.033 --rc genhtml_legend=1 00:04:54.033 --rc geninfo_all_blocks=1 00:04:54.033 --rc geninfo_unexecuted_blocks=1 00:04:54.033 00:04:54.033 ' 00:04:54.033 11:49:30 version -- app/version.sh@17 -- # get_header_version major 00:04:54.033 11:49:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # cut -f2 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.033 11:49:30 version -- app/version.sh@17 -- # major=25 00:04:54.033 11:49:30 version -- app/version.sh@18 -- # get_header_version minor 00:04:54.033 11:49:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # cut -f2 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.033 11:49:30 version -- app/version.sh@18 -- # minor=1 00:04:54.033 11:49:30 version -- app/version.sh@19 -- # get_header_version patch 00:04:54.033 11:49:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # cut -f2 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.033 11:49:30 version -- app/version.sh@19 -- # patch=0 00:04:54.033 11:49:30 version -- app/version.sh@20 -- # get_header_version suffix 00:04:54.033 11:49:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # cut -f2 00:04:54.033 11:49:30 version -- app/version.sh@14 -- # tr -d '"' 00:04:54.033 11:49:30 version -- app/version.sh@20 -- # suffix=-pre 00:04:54.033 11:49:30 version -- app/version.sh@22 -- # version=25.1 00:04:54.033 11:49:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:54.033 11:49:30 version -- app/version.sh@28 -- # version=25.1rc0 00:04:54.033 11:49:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:54.033 11:49:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:54.033 11:49:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:54.033 11:49:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:54.033 00:04:54.033 real 0m0.283s 00:04:54.033 user 0m0.171s 00:04:54.033 sys 0m0.163s 00:04:54.033 11:49:30 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.033 
11:49:30 version -- common/autotest_common.sh@10 -- # set +x 00:04:54.033 ************************************ 00:04:54.033 END TEST version 00:04:54.033 ************************************ 00:04:54.033 11:49:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:54.033 11:49:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:54.033 11:49:30 -- spdk/autotest.sh@194 -- # uname -s 00:04:54.033 11:49:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:54.033 11:49:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:54.033 11:49:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:54.033 11:49:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:54.033 11:49:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:54.033 11:49:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:54.033 11:49:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.033 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:54.033 11:49:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:54.034 11:49:30 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:54.034 11:49:30 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:54.034 11:49:30 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:54.034 11:49:30 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:54.034 11:49:30 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:54.034 11:49:30 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:54.034 11:49:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:54.034 11:49:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.034 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:54.034 ************************************ 00:04:54.034 START TEST nvmf_tcp 00:04:54.034 ************************************ 00:04:54.034 11:49:30 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:54.296 * Looking for test storage... 
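For the version test just concluded: the fields are grepped out of include/spdk/version.h (major 25, minor 1, patch 0, suffix -pre), the patch component is only appended when non-zero, and the -pre suffix becomes rc0. A sketch of that derivation with the traced values:

  major=25 minor=1 patch=0 suffix=-pre    # as extracted from include/spdk/version.h
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0
  echo "$version"   # 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'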
00:04:54.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.296 11:49:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.296 --rc genhtml_branch_coverage=1 00:04:54.296 --rc genhtml_function_coverage=1 00:04:54.296 --rc genhtml_legend=1 00:04:54.296 --rc geninfo_all_blocks=1 00:04:54.296 --rc geninfo_unexecuted_blocks=1 00:04:54.296 00:04:54.296 ' 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.296 --rc genhtml_branch_coverage=1 00:04:54.296 --rc genhtml_function_coverage=1 00:04:54.296 --rc genhtml_legend=1 00:04:54.296 --rc geninfo_all_blocks=1 00:04:54.296 --rc geninfo_unexecuted_blocks=1 00:04:54.296 00:04:54.296 ' 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.296 --rc genhtml_branch_coverage=1 00:04:54.296 --rc genhtml_function_coverage=1 00:04:54.296 --rc genhtml_legend=1 00:04:54.296 --rc geninfo_all_blocks=1 00:04:54.296 --rc geninfo_unexecuted_blocks=1 00:04:54.296 00:04:54.296 ' 00:04:54.296 11:49:30 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.297 --rc genhtml_branch_coverage=1 00:04:54.297 --rc genhtml_function_coverage=1 00:04:54.297 --rc genhtml_legend=1 00:04:54.297 --rc geninfo_all_blocks=1 00:04:54.297 --rc geninfo_unexecuted_blocks=1 00:04:54.297 00:04:54.297 ' 00:04:54.297 11:49:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:54.297 11:49:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:54.297 11:49:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.297 11:49:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:54.297 11:49:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.297 11:49:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.297 ************************************ 00:04:54.297 START TEST nvmf_target_core 00:04:54.297 ************************************ 00:04:54.297 11:49:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.560 * Looking for test storage... 00:04:54.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.560 11:49:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.560 11:49:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.560 11:49:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.560 --rc genhtml_branch_coverage=1 00:04:54.560 --rc genhtml_function_coverage=1 00:04:54.560 --rc genhtml_legend=1 00:04:54.560 --rc geninfo_all_blocks=1 00:04:54.560 --rc geninfo_unexecuted_blocks=1 00:04:54.560 00:04:54.560 ' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.560 --rc genhtml_branch_coverage=1 00:04:54.560 --rc genhtml_function_coverage=1 00:04:54.560 --rc genhtml_legend=1 00:04:54.560 --rc geninfo_all_blocks=1 00:04:54.560 --rc geninfo_unexecuted_blocks=1 00:04:54.560 00:04:54.560 ' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.560 --rc genhtml_branch_coverage=1 00:04:54.560 --rc genhtml_function_coverage=1 00:04:54.560 --rc genhtml_legend=1 00:04:54.560 --rc geninfo_all_blocks=1 00:04:54.560 --rc geninfo_unexecuted_blocks=1 00:04:54.560 00:04:54.560 ' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.560 --rc genhtml_branch_coverage=1 00:04:54.560 --rc genhtml_function_coverage=1 00:04:54.560 --rc genhtml_legend=1 00:04:54.560 --rc geninfo_all_blocks=1 00:04:54.560 --rc geninfo_unexecuted_blocks=1 00:04:54.560 00:04:54.560 ' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.560 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.561 
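The xtrace above steps through scripts/common.sh's lcov version gate: `lt 1.15 2` calls `cmp_versions`, which splits both version strings on `.`, `-` and `:` and walks the fields numerically until one side wins. A minimal sketch of that field-wise compare (illustrative function name; the in-tree helper additionally validates each field via `decimal` and supports the other comparison operators):

version_lt() {                      # sketch of lt/cmp_versions from scripts/common.sh
    local IFS=.-:                   # same separators the trace shows (IFS=.-:)
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first lower field decides: less-than holds
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                        # equal versions: not less-than
}
version_lt 1.15 2 && echo "lcov is pre-2.x"   # matches the 'lt 1.15 2' returning 0 above

That true result is why the trace then sets lcov_rc_opt to the 1.x-style --rc lcov_branch_coverage/lcov_function_coverage names.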
************************************ 00:04:54.561 START TEST nvmf_abort 00:04:54.561 ************************************ 00:04:54.561 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.822 * Looking for test storage... 00:04:54.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.822 --rc genhtml_branch_coverage=1 00:04:54.822 --rc genhtml_function_coverage=1 00:04:54.822 --rc genhtml_legend=1 00:04:54.822 --rc geninfo_all_blocks=1 00:04:54.822 --rc geninfo_unexecuted_blocks=1 00:04:54.822 00:04:54.822 ' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.822 --rc genhtml_branch_coverage=1 00:04:54.822 --rc genhtml_function_coverage=1 00:04:54.822 --rc genhtml_legend=1 00:04:54.822 --rc geninfo_all_blocks=1 00:04:54.822 --rc geninfo_unexecuted_blocks=1 00:04:54.822 00:04:54.822 ' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.822 --rc genhtml_branch_coverage=1 00:04:54.822 --rc genhtml_function_coverage=1 00:04:54.822 --rc genhtml_legend=1 00:04:54.822 --rc geninfo_all_blocks=1 00:04:54.822 --rc geninfo_unexecuted_blocks=1 00:04:54.822 00:04:54.822 ' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.822 --rc genhtml_branch_coverage=1 00:04:54.822 --rc genhtml_function_coverage=1 00:04:54.822 --rc genhtml_legend=1 00:04:54.822 --rc geninfo_all_blocks=1 00:04:54.822 --rc geninfo_unexecuted_blocks=1 00:04:54.822 00:04:54.822 ' 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.822 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
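nvmftestinit, traced next, probes the PCI bus for supported NICs (two ice E810 ports here, 0000:4b:00.0/.1, exposing cvl_0_0 and cvl_0_1) and then, because this is a phy run with both ports in one host, isolates the target port in its own network namespace so initiator 10.0.0.1 and target 10.0.0.2 talk over the real wire. Condensed to plain iproute2/iptables, the plumbing the nvmf/common.sh@265-@291 frames below perform is roughly:

ip -4 addr flush cvl_0_0                             # clear stale addresses first
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                         # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # sanity checks, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping blocks in the trace below confirm the topology before the target application is started inside the namespace with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`.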
00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:54.823 11:49:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:02.964 11:49:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:02.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:02.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:02.964 11:49:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:02.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:02.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:02.964 11:49:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:02.964 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:02.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:02.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:05:02.965 00:05:02.965 --- 10.0.0.2 ping statistics --- 00:05:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.965 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:02.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:02.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:05:02.965 00:05:02.965 --- 10.0.0.1 ping statistics --- 00:05:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.965 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=756301 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 756301 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 756301 ']' 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.965 11:49:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.965 [2024-10-21 11:49:38.978591] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:05:02.965 [2024-10-21 11:49:38.978669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:02.965 [2024-10-21 11:49:39.075369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.965 [2024-10-21 11:49:39.129198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:02.965 [2024-10-21 11:49:39.129259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:02.965 [2024-10-21 11:49:39.129268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.965 [2024-10-21 11:49:39.129275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.965 [2024-10-21 11:49:39.129281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:02.965 [2024-10-21 11:49:39.131306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.965 [2024-10-21 11:49:39.131469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.965 [2024-10-21 11:49:39.131587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.225 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.225 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:03.225 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:03.225 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.225 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 [2024-10-21 11:49:39.855053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 Malloc0 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 Delay0 
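At this point the abort test has assembled its target over RPC: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev stacked on top of it with large artificial latencies (the -r/-t/-w/-n arguments), so that the abort example always finds commands still in flight. The subsystem and listener steps follow just below; issued by hand against a running nvmf_tgt, the sequence is roughly this (rpc_cmd is the test wrapper around scripts/rpc.py; default RPC socket assumed):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # transport, as in abort.sh@17
$rpc bdev_malloc_create 64 4096 -b Malloc0                   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000             # keep I/O outstanding so aborts can race it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The abort example then drives queue depth 128 at the controller, and the NS/CTRLR counters further down tally how many in-flight commands the submitted aborts actually caught.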
00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.486 [2024-10-21 11:49:39.940885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.486 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:03.487 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.487 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.487 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.487 11:49:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:03.487 [2024-10-21 11:49:40.041371] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:06.033 Initializing NVMe Controllers 00:05:06.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:06.033 controller IO queue size 128 less than required 00:05:06.033 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:06.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:06.033 Initialization complete. Launching workers. 
00:05:06.033 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28445 00:05:06.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28506, failed to submit 62 00:05:06.033 success 28449, unsuccessful 57, failed 0 00:05:06.033 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:06.033 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.033 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:06.034 rmmod nvme_tcp 00:05:06.034 rmmod nvme_fabrics 00:05:06.034 rmmod nvme_keyring 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 756301 ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 756301 ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 756301' 00:05:06.034 killing process with pid 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 756301 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.034 11:49:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.947 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:07.947 00:05:07.947 real 0m13.356s 00:05:07.947 user 0m13.863s 00:05:07.947 sys 0m6.641s 00:05:07.947 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.947 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.947 ************************************ 00:05:07.947 END TEST nvmf_abort 00:05:07.947 ************************************ 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:08.208 ************************************ 00:05:08.208 START TEST nvmf_ns_hotplug_stress 00:05:08.208 ************************************ 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:08.208 * Looking for test storage... 
00:05:08.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.208 --rc genhtml_branch_coverage=1 00:05:08.208 --rc genhtml_function_coverage=1 00:05:08.208 --rc genhtml_legend=1 00:05:08.208 --rc geninfo_all_blocks=1 00:05:08.208 --rc geninfo_unexecuted_blocks=1 00:05:08.208 00:05:08.208 ' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.208 --rc genhtml_branch_coverage=1 00:05:08.208 --rc genhtml_function_coverage=1 00:05:08.208 --rc genhtml_legend=1 00:05:08.208 --rc geninfo_all_blocks=1 00:05:08.208 --rc geninfo_unexecuted_blocks=1 00:05:08.208 00:05:08.208 ' 00:05:08.208 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.208 --rc genhtml_branch_coverage=1 00:05:08.208 --rc genhtml_function_coverage=1 00:05:08.208 --rc genhtml_legend=1 00:05:08.208 --rc geninfo_all_blocks=1 00:05:08.208 --rc geninfo_unexecuted_blocks=1 00:05:08.208 00:05:08.208 ' 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.209 --rc genhtml_branch_coverage=1 00:05:08.209 --rc genhtml_function_coverage=1 00:05:08.209 --rc genhtml_legend=1 00:05:08.209 --rc geninfo_all_blocks=1 00:05:08.209 --rc geninfo_unexecuted_blocks=1 00:05:08.209 00:05:08.209 ' 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.209 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:08.470 11:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:16.612 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.612 
11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:16.612 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:16.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:16.612 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.612 11:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:16.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:05:16.612 00:05:16.612 --- 10.0.0.2 ping statistics --- 00:05:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.612 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:05:16.612 00:05:16.612 --- 10.0.0.1 ping statistics --- 00:05:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.612 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.612 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=761194 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 761194 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
761194 ']' 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.613 11:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.613 [2024-10-21 11:49:52.394176] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:16.613 [2024-10-21 11:49:52.394241] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.613 [2024-10-21 11:49:52.484704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.613 [2024-10-21 11:49:52.536863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.613 [2024-10-21 11:49:52.536916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.613 [2024-10-21 11:49:52.536925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.613 [2024-10-21 11:49:52.536933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.613 [2024-10-21 11:49:52.536939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
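The rest of the trace brings the target up (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and the Malloc0/Delay0/NULL1 bdevs) and then, while spdk_nvme_perf runs against the target, repeats one namespace-hotplug cycle per iteration. A minimal sketch of that cycle, reconstructed from the xtrace at ns_hotplug_stress.sh@44-@50 below — the while condition and the "+ 1" increment are inferences from the traced values, not the script's literal source:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    # PERF_PID is the backgrounded spdk_nvme_perf process (761892 in this run)
    while kill -0 "$PERF_PID"; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach nsid 1 (Delay0) while I/O runs
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
        null_size=$((null_size + 1))    # trace shows 1001, 1002, ...: assumed simple increment
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # resize the other namespace's backing bdev
    done
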
00:05:16.613 [2024-10-21 11:49:52.538833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.613 [2024-10-21 11:49:52.538968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.613 [2024-10-21 11:49:52.538968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:16.874 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:16.874 [2024-10-21 11:49:53.432931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.134 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:17.134 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:17.394 [2024-10-21 11:49:53.828111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:17.394 11:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.655 11:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:17.655 Malloc0 00:05:17.937 11:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.937 Delay0 00:05:17.937 11:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.197 11:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:18.458 NULL1 00:05:18.458 11:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:18.458 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=761892 00:05:18.458 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:18.458 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:18.458 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.719 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.980 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:18.980 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:18.980 true 00:05:18.980 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:18.980 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.241 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.502 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:19.502 11:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:19.502 true 00:05:19.763 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:19.763 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.763 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.024 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:20.024 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:20.285 true 00:05:20.285 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:20.285 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.285 11:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.545 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:20.545 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:20.805 true 00:05:20.805 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:20.805 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.805 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.066 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:21.066 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:21.327 true 00:05:21.327 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:21.327 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.327 11:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.619 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:21.619 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:21.911 true 00:05:21.911 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:21.911 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.911 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.188 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:22.188 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:22.188 true 00:05:22.484 11:49:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:22.484 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.484 11:49:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.766 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.766 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:22.766 true 00:05:22.766 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:22.766 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.026 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.287 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:23.287 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.287 true 00:05:23.287 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:23.287 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.548 11:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.808 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:23.808 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:23.808 true 00:05:23.808 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:23.808 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.069 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.329 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:24.329 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:24.329 true 00:05:24.329 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:24.329 11:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.589 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.850 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:24.850 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:24.850 true 00:05:24.850 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:24.850 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.110 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.370 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:25.370 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:25.370 true 00:05:25.630 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:25.630 11:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.630 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.889 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:25.889 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:26.149 true 00:05:26.149 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:26.149 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.149 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.409 11:50:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:26.409 11:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:26.670 true 00:05:26.670 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:26.670 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.670 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.929 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:26.929 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:27.189 true 00:05:27.189 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:27.189 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.449 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.449 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:27.449 11:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:27.710 true 00:05:27.710 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:27.710 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.969 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.969 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:27.969 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:28.229 true 00:05:28.229 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:28.229 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.489 11:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.750 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:28.750 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:28.750 true 00:05:28.750 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:28.750 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.010 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.270 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:29.270 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:29.270 true 00:05:29.270 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:29.270 11:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.530 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.791 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:29.791 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:29.791 true 00:05:30.051 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:30.051 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.051 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.313 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:30.313 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:30.573 true 00:05:30.573 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:30.573 11:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.573 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.834 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:30.834 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:31.094 true 00:05:31.094 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:31.094 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.355 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.355 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:31.355 11:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:31.616 true 00:05:31.616 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:31.616 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.877 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.877 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:31.877 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:32.138 true 00:05:32.138 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:32.138 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.399 11:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.659 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:32.659 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:32.659 true 00:05:32.659 11:50:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:32.659 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.920 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.180 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:33.180 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:33.180 true 00:05:33.180 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:33.180 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.440 11:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.700 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:33.700 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:33.700 true 00:05:33.960 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:33.960 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.960 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.220 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:34.220 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:34.481 true 00:05:34.481 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:34.481 11:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.481 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.741 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:34.741 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:35.001 true 00:05:35.001 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:35.001 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.262 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.262 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:35.262 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:35.522 true 00:05:35.522 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:35.522 11:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.782 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.782 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:35.782 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:36.042 true 00:05:36.042 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:36.042 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.303 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.303 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:36.303 11:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:36.563 true 00:05:36.563 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:36.563 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.824 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.084 11:50:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:37.084 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:37.084 true 00:05:37.084 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:37.084 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.344 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.605 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:37.605 11:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:37.605 true 00:05:37.605 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:37.605 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.866 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.126 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:38.126 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:38.126 true 00:05:38.388 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:38.388 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.388 11:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.649 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:38.649 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:38.910 true 00:05:38.910 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:38.910 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.910 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.171 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:39.171 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:39.431 true 00:05:39.431 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:39.431 11:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.431 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.692 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:39.692 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:39.954 true 00:05:39.954 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:39.954 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.214 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.214 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:40.214 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:40.475 true 00:05:40.475 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:40.475 11:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.735 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.735 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:40.735 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:40.996 true 00:05:40.996 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:40.996 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.255 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.514 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:41.514 11:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:41.514 true 00:05:41.514 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:41.514 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.774 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.034 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:42.034 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:42.034 true 00:05:42.034 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:42.034 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.295 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.555 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:42.555 11:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:42.555 true 00:05:42.815 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:42.815 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.815 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.076 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:43.076 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:43.336 true 00:05:43.336 11:50:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:43.336 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.336 11:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.597 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:43.597 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:43.857 true 00:05:43.857 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:43.857 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.857 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.118 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:44.118 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:44.378 true 00:05:44.378 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:44.378 11:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.638 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.638 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:44.638 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:44.899 true 00:05:44.899 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:44.899 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.161 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.161 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:45.161 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:45.422 true 00:05:45.422 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:45.422 11:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.712 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.972 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:45.972 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:45.972 true 00:05:45.972 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:45.972 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.233 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.493 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:46.493 11:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:46.493 true 00:05:46.493 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:46.493 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.754 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.014 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:47.014 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:47.014 true 00:05:47.274 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892 00:05:47.274 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.274 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.535 11:50:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:05:47.535 11:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:05:47.795 true
00:05:47.795 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892
00:05:47.795 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.795 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.055 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:05:48.055 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:05:48.316 true
00:05:48.316 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892
00:05:48.316 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.316 11:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.577 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:05:48.577 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:05:48.846 true
00:05:48.846 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892
00:05:48.846 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.846 Initializing NVMe Controllers
00:05:48.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:48.846 Controller IO queue size 128, less than required.
00:05:48.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:48.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:48.846 Initialization complete. Launching workers.
00:05:48.846 ========================================================
00:05:48.846                                                                              Latency(us)
00:05:48.846 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:05:48.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30797.87      15.04    4156.02    1125.42   11168.75
00:05:48.846 ========================================================
00:05:48.846 Total                                                                   :   30797.87      15.04    4156.02    1125.42   11168.75
00:05:49.142 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:49.142 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:05:49.142 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:05:49.428 true
00:05:49.428 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761892
00:05:49.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (761892) - No such process
00:05:49.428 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 761892
00:05:49.428 11:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.428 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:49.688 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:49.688 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:49.688 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:49.688 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:49.688 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:49.949 null0
00:05:49.949 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:49.949 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:49.949 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:49.949 null1
00:05:50.210 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:50.210 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:50.210 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:50.210 null2
00:05:50.210 11:50:26
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.210 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.210 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:50.470 null3 00:05:50.470 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.470 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.470 11:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:50.730 null4 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:50.730 null5 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.730 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:50.992 null6 00:05:50.992 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.992 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.992 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:51.254 null7 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
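
A note on what this trace is exercising: everything up to the "(761892) - No such process" line above is the single-namespace phase of ns_hotplug_stress.sh. While the I/O generator (PID 761892 in this run) stays alive, lines 44-50 of the script keep detaching namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaching the Delay0 bdev, bumping null_size, and resizing NULL1 to the new value; the statistics block reproduced above appears to be that I/O process reporting roughly 30.8k IOPS at about 4.2 ms average latency across all of that churn. The following is a minimal Python sketch of the same RPC sequence, assuming rpc.py's command-line form exactly as invoked in the trace; the rpc()/alive() helpers and the starting counter value are illustrative, not taken from the script.

    # Illustrative sketch only -- not the SPDK script itself. Mirrors the
    # rpc.py calls traced at ns_hotplug_stress.sh lines 44-50.
    import os
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"

    def rpc(*args):
        # one rpc.py invocation per traced command
        subprocess.run([RPC, *args], check=True)

    def alive(pid):
        # same liveness probe as the script's `kill -0 $pid`
        try:
            os.kill(pid, 0)
            return True
        except ProcessLookupError:
            return False

    null_size = 1026  # hypothetical starting value; this excerpt joins the run mid-loop
    while alive(761892):                                   # line 44
        rpc("nvmf_subsystem_remove_ns", NQN, "1")          # line 45
        rpc("nvmf_subsystem_add_ns", NQN, "Delay0")        # line 46
        null_size += 1                                     # line 49
        rpc("bdev_null_resize", "NULL1", str(null_size))   # line 50

Once the generator exits, kill -0 fails, the loop ends, and the script removes namespaces 1 and 2 (lines 53-55) before moving on to the concurrent phase that begins at nthreads=8.
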
00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 768462 768463 768465 768467 768469 768471 768473 768475 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.254 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
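
The phase traced from nthreads=8 onward is the concurrent stress: the loop at lines 59-60 creates eight null bdevs (null0 through null7, each via bdev_null_create with arguments 100 and 4096), lines 62-64 background one add_remove worker per bdev (their PIDs appear in the `wait 768462 768463 768465 768467 768469 768471 768473 768475` above), and each worker runs the add_remove loop of lines 14-18: attach its namespace ID to cnode1, then detach it, ten times over. Below is a rough thread-based Python equivalent of those eight shell jobs, under the same rpc.py CLI assumption as the earlier sketch; the executor structure is illustrative, not the script's mechanism.

    # Illustrative sketch only: a thread-based stand-in for the eight
    # backgrounded add_remove jobs traced above (ns_hotplug_stress.sh 58-66).
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"

    def add_remove(nsid: int, bdev: str) -> None:
        # lines 14-18: hot-add namespace `nsid` backed by `bdev`, then drop it
        for _ in range(10):
            subprocess.run([RPC, "nvmf_subsystem_add_ns", "-n", str(nsid), NQN, bdev],
                           check=True)
            subprocess.run([RPC, "nvmf_subsystem_remove_ns", NQN, str(nsid)],
                           check=True)

    with ThreadPoolExecutor(max_workers=8) as pool:
        # worker i churns namespace i+1 backed by null<i>, matching the
        # traced calls `add_remove 1 null0` ... `add_remove 8 null7`
        futures = [pool.submit(add_remove, i + 1, f"null{i}") for i in range(8)]
        for f in futures:
            f.result()  # like the script's `wait` on the eight PIDs

The interleaved @17/@18 records that fill the rest of this trace are those eight workers mutating cnode1's namespace table simultaneously, which is precisely the hot-plug race this test is designed to provoke.
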
00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.515 11:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.515 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.516 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.777 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.038 11:50:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.038 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.299 11:50:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.299 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.559 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.559 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.559 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.559 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.559 11:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.559 11:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.559 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.819 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.820 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.079 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.337 11:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.337 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.338 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.338 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.596 11:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.596 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.597 11:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.597 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.857 11:50:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:05:53.857 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.118 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.118 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.118 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.118 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.118 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.119 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.380 11:50:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.380 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.381 11:50:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.381 11:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.642 
11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.642 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.903 11:50:31 
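The churn above is the core of the hotplug stress test: for ten rounds, namespaces 1-8 (backed by null bdevs null0-null7) are attached to and detached from nqn.2016-06.io.spdk:cnode1 through rpc.py while the initiator keeps I/O in flight. A minimal sketch of that loop, assuming the per-namespace add/remove choice is randomized (the trace shows the calls interleaved in varying order; the exact selection logic in ns_hotplug_stress.sh is not visible here):

# Sketch of the traced churn: the "(( ++i ))" / "(( i < 10 ))" pairs from
# line 16 imply a C-style loop header; lines 17 and 18 are the add/remove calls.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do                     # line 16 in the trace
    for nsid in $(shuf -e {1..8}); do              # NSIDs 1-8 in random order (assumed)
        if ((RANDOM % 2)); then
            # line 17: attach bdev null(N-1) as namespace N (null0 -> NSID 1, ...)
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
        else
            # line 18: detach the namespace again while I/O is in flight
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        fi
    done
done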
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.903 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:55.163 rmmod nvme_tcp 00:05:55.163 rmmod nvme_fabrics 00:05:55.163 rmmod nvme_keyring 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 761194 ']' 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 761194 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 761194 ']' 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 761194 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 761194 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 761194' 00:05:55.163 killing process with pid 761194 00:05:55.163 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 761194 00:05:55.163 11:50:31 
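With the loop done, the trap is cleared and nvmftestfini tears the target down: the nvme-tcp kernel modules are unloaded in a retry loop, then the target process (pid 761194, running as reactor_1) is killed. A condensed sketch of the two traced helpers; the retry/sleep details are assumptions, and the real versions in nvmf/common.sh and autotest_common.sh carry more branches (uname check, FreeBSD path, exit codes):

nvmfcleanup() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # the rmmod lines above are this call's output
        sleep 1                             # retry while connections drain (assumed backoff)
    done
    modprobe -v -r nvme-fabrics
    set -e
}

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"                                    # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
    if [ "$process_name" = sudo ]; then
        return 1                                      # refuse to kill a bare sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}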
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 761194
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:55.424 11:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:57.338
00:05:57.338 real 0m49.265s
00:05:57.338 user 3m21.002s
00:05:57.338 sys 0m17.426s
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:57.338 ************************************
00:05:57.338 END TEST nvmf_ns_hotplug_stress
00:05:57.338 ************************************
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:57.338 11:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:57.599 ************************************
00:05:57.599 START TEST nvmf_delete_subsystem
00:05:57.599 ************************************
00:05:57.599 11:50:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:57.599 * Looking for test storage...
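The banners and the time(1) summary above come from run_test, which wraps every test script. A sketch of the shape this log implies (an inference from the banners and the traced arity check, not a copy of autotest_common.sh):

run_test() {
    if [ "$#" -le 1 ]; then          # the traced "'[' 3 -le 1 ']'": need a name plus a command
        echo "usage: run_test <name> <cmd> [args...]" >&2
        return 1
    fi
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                        # produces the real/user/sys summary seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}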
00:05:57.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.599 --rc genhtml_branch_coverage=1 00:05:57.599 --rc genhtml_function_coverage=1 00:05:57.599 --rc genhtml_legend=1 00:05:57.599 --rc geninfo_all_blocks=1 00:05:57.599 --rc geninfo_unexecuted_blocks=1 00:05:57.599 00:05:57.599 ' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.599 --rc genhtml_branch_coverage=1 00:05:57.599 --rc genhtml_function_coverage=1 00:05:57.599 --rc genhtml_legend=1 00:05:57.599 --rc geninfo_all_blocks=1 00:05:57.599 --rc geninfo_unexecuted_blocks=1 00:05:57.599 00:05:57.599 ' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.599 --rc genhtml_branch_coverage=1 00:05:57.599 --rc genhtml_function_coverage=1 00:05:57.599 --rc genhtml_legend=1 00:05:57.599 --rc geninfo_all_blocks=1 00:05:57.599 --rc geninfo_unexecuted_blocks=1 00:05:57.599 00:05:57.599 ' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.599 --rc genhtml_branch_coverage=1 00:05:57.599 --rc genhtml_function_coverage=1 00:05:57.599 --rc genhtml_legend=1 00:05:57.599 --rc geninfo_all_blocks=1 00:05:57.599 --rc geninfo_unexecuted_blocks=1 00:05:57.599 00:05:57.599 ' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
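The lt/cmp_versions trace above decides which lcov flags to export by comparing the installed lcov version (awk '{print $NF}' of lcov --version) against 2: both version strings are split on '.', '-' or ':' and compared component by component until the first difference. A condensed sketch of that logic; the real cmp_versions in scripts/common.sh also normalizes non-numeric components through decimal() and keeps lt/gt/eq counters so the other operators work too:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v d1 d2 op=$2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}            # 2 and 1 in the trace above
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}            # missing components compare as 0
        if ((d1 > d2)); then [[ $op == '>' ]]; return; fi   # first difference decides
        if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                                # equal all the way down
}

Here lt 1.15 2 returned 0 (1 < 2 on the first component, matching the traced return 0), which selects the --rc lcov_branch_coverage=1 option set exported just after.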
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.599 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
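The PATH value above carries the Go/protoc/golangci prefixes many times over because paths/export.sh prepends them unconditionally every time it is sourced, once per nested test script. A guard like the following (an alternative sketch, not SPDK's code) would keep the prepend idempotent:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH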
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.600 11:50:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:05.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.740 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.740 
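The "[: : integer expression expected" message a few entries up comes from nvmf/common.sh line 33 inside build_nvmf_app_args: test's -eq demands integers on both sides, and the variable checked there expands to the empty string in this run. Its name is not visible in the trace, so SOME_FLAG below is a stand-in:

[ '' -eq 1 ]                  # -> "[: : integer expression expected", exit status 2
[ "${SOME_FLAG:-0}" -eq 1 ]   # a guard that treats unset/empty as 0 instead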
11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:05.741 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:05.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:05.741 Found net devices under 0000:4b:00.1: cvl_0_1 
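The trace above is the harness matching supported NVMe-oF NICs to kernel interfaces: it builds allowlists of PCI device IDs (E810: 0x1592/0x159b, X722: 0x37d2, plus the mlx5 family under vendor 0x15b3), scans the bus, and resolves each match to a netdev through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs walk, assuming only the two E810 device IDs seen in this log (everything else simplified):

    # Sketch: resolve Intel E810 PCI functions to their kernel net interfaces.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && ($device == 0x1592 || $device == 0x159b) ]] || continue
        for net in "$pci"/net/*; do
            # each entry under <pci>/net/ is an interface name, e.g. cvl_0_0
            [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
    done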
00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:05.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:05.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:05.741 00:06:05.741 --- 10.0.0.2 ping statistics --- 00:06:05.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.741 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:05.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:06:05.741 00:06:05.741 --- 10.0.0.1 ping statistics --- 00:06:05.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.741 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=773643 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 773643 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 773643 ']' 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.741 11:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.741 [2024-10-21 11:50:41.678411] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:05.741 [2024-10-21 11:50:41.678476] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.741 [2024-10-21 11:50:41.745225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.741 [2024-10-21 11:50:41.791412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:05.741 [2024-10-21 11:50:41.791466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:05.741 [2024-10-21 11:50:41.791472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.741 [2024-10-21 11:50:41.791477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.741 [2024-10-21 11:50:41.791482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:05.741 [2024-10-21 11:50:41.792884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.741 [2024-10-21 11:50:41.792891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.741 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.741 [2024-10-21 11:50:41.941182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:05.742 11:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.742 [2024-10-21 11:50:41.965531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.742 NULL1 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.742 Delay0 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.742 11:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.742 11:50:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.742 11:50:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=773691 00:06:05.742 11:50:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:05.742 11:50:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:05.742 [2024-10-21 11:50:42.082445] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
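At this point the target stack is fully assembled over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial number, -m 10 caps the namespace count), a listener on 10.0.0.2:4420, a 1000 MiB null bdev with 512-byte blocks, and a delay bdev layered on top and attached as NSID 1. The four 1000000 values passed to bdev_delay_create are the average and p99 read/write latencies in microseconds, so every I/O is parked for about one second; with perf queueing 128 I/Os per core (-q 128, cores 2 and 3 via -c 0xC) for five seconds (-t 5), commands are guaranteed to still be in flight when the subsystem is deleted next. Outside the harness's rpc_cmd wrapper, the same stack could be driven with scripts/rpc.py (a sketch, assuming the default /var/tmp/spdk.sock):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0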
00:06:07.655 11:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:07.655 11:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.655 11:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 starting I/O failed: -6 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 [2024-10-21 11:50:44.298355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162b120 is same with the 
state(6) to be set 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Read completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.916 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed 
with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 starting I/O failed: -6 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 [2024-10-21 11:50:44.301747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f409000d310 is same with the state(6) to be set 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 
00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Write completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:07.917 Read completed with error (sct=0, sc=8) 00:06:08.860 [2024-10-21 11:50:45.267627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631050 is same with the state(6) to be set 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read 
completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 [2024-10-21 11:50:45.301598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634490 is same with the state(6) to be set 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 [2024-10-21 11:50:45.302092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1693ae0 is same with the state(6) to be set 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 [2024-10-21 11:50:45.303742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f409000cfe0 is same 
with the state(6) to be set 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Write completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 Read completed with error (sct=0, sc=8) 00:06:08.860 [2024-10-21 11:50:45.304155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f409000d640 is same with the state(6) to be set 00:06:08.860 Initializing NVMe Controllers 00:06:08.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:08.860 Controller IO queue size 128, less than required. 00:06:08.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:08.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:08.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:08.860 Initialization complete. Launching workers. 
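The storm of "completed with error (sct=0, sc=8)" entries above is the behavior under test rather than a failure of it: status code type 0, status code 0x08 is the NVMe generic "Command Aborted due to SQ Deletion", i.e. the target failing back queued commands while nvmf_delete_subsystem tears down the qpairs, and each "starting I/O failed: -6" is the submit path refusing new I/O (likely -ENXIO) once its qpair is gone. In the perf summary that follows, the Total average is simply the IOPS-weighted mean of the two per-core averages, which is easy to check (inputs copied from the table below):

    awk 'BEGIN { printf "%.2f\n", (177.14*880034.65 + 170.67*893588.37) / (177.14 + 170.67) }'
    # prints 886685.45, matching the Total row's 886685.48 up to input rounding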
00:06:08.860 ======================================================== 00:06:08.860 Latency(us) 00:06:08.860 Device Information : IOPS MiB/s Average min max 00:06:08.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.14 0.09 880034.65 365.47 1007615.60 00:06:08.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.67 0.08 893588.37 300.03 1011511.94 00:06:08.860 ======================================================== 00:06:08.860 Total : 347.81 0.17 886685.48 300.03 1011511.94 00:06:08.860 00:06:08.860 [2024-10-21 11:50:45.304582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631050 (9): Bad file descriptor 00:06:08.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:08.860 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.860 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:08.860 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 773691 00:06:08.860 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 773691 00:06:09.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (773691) - No such process 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 773691 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 773691 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 773691 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.431 11:50:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.431 [2024-10-21 11:50:45.837184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.431 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=774553 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553 00:06:09.432 11:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.432 [2024-10-21 11:50:45.923568] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
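The second half of the test rebuilds the subsystem and races a shorter perf run (-t 3) against deletion, waiting for the child with the bounded poll at delete_subsystem.sh lines 56-60: kill -0 merely probes whether the PID still exists, and the loop gives up after 20 half-second sleeps. The repeated @60/@57/@58 entries that follow are successive passes of that loop. Reduced to a standalone sketch (variable names simplified; unlike this sketch, the harness lets kill's "No such process" complaint print, as the log later shows):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # probe liveness without sending a signal
        (( delay++ > 20 )) && { echo "pid $perf_pid still alive" >&2; exit 1; }
        sleep 0.5                               # ~10 s budget in total
    done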
00:06:10.002 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:10.002 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:10.002 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:10.573 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:10.573 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:10.573 11:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:10.834 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:10.834 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:10.834 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:11.405 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:11.405 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:11.405 11:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:11.975 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:11.975 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:11.975 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:12.550 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:12.550 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:12.550 11:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:12.550 Initializing NVMe Controllers
00:06:12.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:12.550 Controller IO queue size 128, less than required.
00:06:12.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:12.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:12.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:12.550 Initialization complete. Launching workers.
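Each iteration above burned half a second while the 3-second run drained. The numbers in the summary below follow directly from the delay bdev: with 128 I/Os in flight per core and roughly one second injected per I/O, Little's law predicts IOPS = queue depth / latency = 128 / 1.002254 s = 127.7 per core, which perf reports as 128.00, and the minimum latencies of about 1,000,182 us sit just above the configured 1,000,000 us floor (the extra few milliseconds of average being genuine queueing and transport time):

    awk 'BEGIN { printf "%.1f\n", 128 / 1.002254 }'   # ~127.7 IOPS per core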
00:06:12.550 ========================================================
00:06:12.550 Latency(us)
00:06:12.550 Device Information : IOPS MiB/s Average min max
00:06:12.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002254.01 1000181.87 1005310.03
00:06:12.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003510.07 1000239.73 1008518.39
00:06:12.550 ========================================================
00:06:12.550 Total : 256.00 0.12 1002882.04 1000181.87 1008518.39
00:06:12.550
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774553
00:06:12.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (774553) - No such process
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 774553
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:12.811 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:12.811 rmmod nvme_tcp
00:06:13.072 rmmod nvme_fabrics
00:06:13.072 rmmod nvme_keyring
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 773643 ']'
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 773643
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 773643 ']'
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 773643
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773643
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773643' 00:06:13.072 killing process with pid 773643 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 773643 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 773643 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.072 11:50:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:15.617 00:06:15.617 real 0m17.768s 00:06:15.617 user 0m29.653s 00:06:15.617 sys 0m6.742s 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.617 ************************************ 00:06:15.617 END TEST nvmf_delete_subsystem 00:06:15.617 ************************************ 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.617 ************************************ 00:06:15.617 START TEST nvmf_host_management 00:06:15.617 ************************************ 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:15.617 * Looking for test storage... 
00:06:15.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.617 --rc genhtml_branch_coverage=1 00:06:15.617 --rc genhtml_function_coverage=1 00:06:15.617 --rc genhtml_legend=1 00:06:15.617 --rc geninfo_all_blocks=1 00:06:15.617 --rc geninfo_unexecuted_blocks=1 00:06:15.617 00:06:15.617 ' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.617 --rc genhtml_branch_coverage=1 00:06:15.617 --rc genhtml_function_coverage=1 00:06:15.617 --rc genhtml_legend=1 00:06:15.617 --rc geninfo_all_blocks=1 00:06:15.617 --rc geninfo_unexecuted_blocks=1 00:06:15.617 00:06:15.617 ' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.617 --rc genhtml_branch_coverage=1 00:06:15.617 --rc genhtml_function_coverage=1 00:06:15.617 --rc genhtml_legend=1 00:06:15.617 --rc geninfo_all_blocks=1 00:06:15.617 --rc geninfo_unexecuted_blocks=1 00:06:15.617 00:06:15.617 ' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.617 --rc genhtml_branch_coverage=1 00:06:15.617 --rc genhtml_function_coverage=1 00:06:15.617 --rc genhtml_legend=1 00:06:15.617 --rc geninfo_all_blocks=1 00:06:15.617 --rc geninfo_unexecuted_blocks=1 00:06:15.617 00:06:15.617 ' 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.617 11:50:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.617 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:15.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.618 11:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:23.766 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:23.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:23.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:23.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.767 11:50:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:23.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:06:23.767 00:06:23.767 --- 10.0.0.2 ping statistics --- 00:06:23.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.767 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:23.767 00:06:23.767 --- 10.0.0.1 ping statistics --- 00:06:23.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.767 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:23.767 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=779472 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 779472 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:23.768 11:50:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 779472 ']' 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.768 11:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.768 [2024-10-21 11:50:59.591200] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:23.768 [2024-10-21 11:50:59.591267] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.768 [2024-10-21 11:50:59.680751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.768 [2024-10-21 11:50:59.733634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.768 [2024-10-21 11:50:59.733684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.768 [2024-10-21 11:50:59.733693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.768 [2024-10-21 11:50:59.733700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.768 [2024-10-21 11:50:59.733706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
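The nvmftestinit trace above amounts to a self-contained loopback topology: one port of the E810 NIC (cvl_0_0) is moved into a fresh network namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stays in the default namespace as 10.0.0.1, an iptables rule opens TCP port 4420 for the NVMe/TCP listener, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same setup, assuming the cvl_0_0/cvl_0_1 device names seen in this run (requires root):

  # move one port into its own namespace and address both ends of the loopback
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring up both ports plus loopback inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow the NVMe/TCP listener port through the firewall
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This mirrors the ip/iptables/ping commands traced above; only the exact interface names vary per machine.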
00:06:23.768 [2024-10-21 11:50:59.736116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.768 [2024-10-21 11:50:59.736279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.768 [2024-10-21 11:50:59.736447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.768 [2024-10-21 11:50:59.736449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 [2024-10-21 11:51:00.468061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.030 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 Malloc0 00:06:24.030 [2024-10-21 11:51:00.545888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=779782 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 779782 /var/tmp/bdevperf.sock 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 779782 ']' 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:24.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:24.031 { 00:06:24.031 "params": { 00:06:24.031 "name": "Nvme$subsystem", 00:06:24.031 "trtype": "$TEST_TRANSPORT", 00:06:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:24.031 "adrfam": "ipv4", 00:06:24.031 "trsvcid": "$NVMF_PORT", 00:06:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:24.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:24.031 "hdgst": ${hdgst:-false}, 00:06:24.031 "ddgst": ${ddgst:-false} 00:06:24.031 }, 00:06:24.031 "method": "bdev_nvme_attach_controller" 00:06:24.031 } 00:06:24.031 EOF 00:06:24.031 )") 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:24.031 11:51:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:24.031 "params": { 00:06:24.031 "name": "Nvme0", 00:06:24.031 "trtype": "tcp", 00:06:24.031 "traddr": "10.0.0.2", 00:06:24.031 "adrfam": "ipv4", 00:06:24.031 "trsvcid": "4420", 00:06:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:24.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:24.031 "hdgst": false, 00:06:24.031 "ddgst": false 00:06:24.031 }, 00:06:24.031 "method": "bdev_nvme_attach_controller" 00:06:24.031 }' 00:06:24.294 [2024-10-21 11:51:00.665119] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:24.294 [2024-10-21 11:51:00.665199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779782 ] 00:06:24.294 [2024-10-21 11:51:00.750145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.294 [2024-10-21 11:51:00.804148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.554 Running I/O for 10 seconds... 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=801 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 801 -ge 100 ']' 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:25.128 11:51:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.128 [2024-10-21 11:51:01.553759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0210 is same with the state(6) to be set 00:06:25.128 [2024-10-21 11:51:01.553832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0210 is same with the state(6) to be set 00:06:25.128 [2024-10-21 11:51:01.553841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0210 is same with the state(6) to be set 00:06:25.128 [2024-10-21 11:51:01.553849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0210 is same with the state(6) to be set 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.128 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.128 [2024-10-21 11:51:01.560966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.128 [2024-10-21 11:51:01.561027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.128 [2024-10-21 11:51:01.561048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.128 [2024-10-21 11:51:01.561057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.128 [2024-10-21 11:51:01.561067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.128 [2024-10-21 11:51:01.561076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.128 [2024-10-21 11:51:01.561086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [2024-10-21 11:51:01.561670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.129 [2024-10-21 11:51:01.561678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.129 [... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION notice pair repeats for every remaining in-flight command on qid 1 (WRITE cid 50-63, lba 121088-122752; READ cid 4-17, lba 115200-116864) while the submission queue is deleted ...] 00:06:25.129 [2024-10-21 11:51:01.562209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:25.129 [2024-10-21 11:51:01.562278] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe9c9e0 was disconnected and freed. reset controller.
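The flood of ABORTED - SQ DELETION completions above is the expected teardown signature: when the reset path deletes I/O submission queue 1, every command still queued on it completes with that status before the qpair is freed. The harness then relaunches bdevperf (a few lines below) with a JSON config assembled by gen_nvmf_target_json and handed over on /dev/fd/62. A standalone sketch of that relaunch, using the flags and target values from this run; the outer subsystems/bdev wrapper is the usual SPDK app-config shape and is assumed here, since the log only captures the inner config element:

#!/usr/bin/env bash
# Sketch: rerun the bdevperf verify job against the TCP target, feeding the
# bdev config through process substitution (which is why the log shows
# --json /dev/fd/62). SPDK_DIR and 10.0.0.2:4420 are this testbed's values.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK_DIR/build/examples/bdevperf" -q 64 -o 65536 -w verify -t 1 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)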
00:06:25.129 [2024-10-21 11:51:01.563525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:25.129 task offset: 116992 on job bdev=Nvme0n1 fails 00:06:25.129 00:06:25.129 Latency(us) 00:06:25.129 [2024-10-21T09:51:01.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.129 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:25.129 Job: Nvme0n1 ended in about 0.55 seconds with error 00:06:25.129 Verification LBA range: start 0x0 length 0x400 00:06:25.129 Nvme0n1 : 0.55 1635.04 102.19 116.79 0.00 35579.15 1740.80 36918.61 00:06:25.129 [2024-10-21T09:51:01.724Z] =================================================================================================================== 00:06:25.129 [2024-10-21T09:51:01.724Z] Total : 1635.04 102.19 116.79 0.00 35579.15 1740.80 36918.61 00:06:25.129 [2024-10-21 11:51:01.565769] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.129 [2024-10-21 11:51:01.565805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc84a10 (9): Bad file descriptor 00:06:25.129 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.129 11:51:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:25.129 [2024-10-21 11:51:01.618996] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 779782 00:06:26.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (779782) - No such process 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:26.072 { 00:06:26.072 "params": { 00:06:26.072 "name": "Nvme$subsystem", 00:06:26.072 "trtype": "$TEST_TRANSPORT", 00:06:26.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:26.072 "adrfam": "ipv4", 00:06:26.072 "trsvcid": "$NVMF_PORT", 00:06:26.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:26.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:26.072 "hdgst": ${hdgst:-false}, 00:06:26.072 "ddgst": ${ddgst:-false} 00:06:26.072 }, 00:06:26.072 "method": "bdev_nvme_attach_controller" 00:06:26.072 } 00:06:26.072 EOF 00:06:26.072 )") 00:06:26.072 11:51:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:26.072 11:51:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:26.072 "params": { 00:06:26.072 "name": "Nvme0", 00:06:26.072 "trtype": "tcp", 00:06:26.072 "traddr": "10.0.0.2", 00:06:26.072 "adrfam": "ipv4", 00:06:26.072 "trsvcid": "4420", 00:06:26.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:26.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:26.072 "hdgst": false, 00:06:26.072 "ddgst": false 00:06:26.072 }, 00:06:26.072 "method": "bdev_nvme_attach_controller" 00:06:26.072 }' 00:06:26.072 [2024-10-21 11:51:02.630480] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:26.072 [2024-10-21 11:51:02.630535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780211 ] 00:06:26.334 [2024-10-21 11:51:02.707063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.334 [2024-10-21 11:51:02.743770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.595 Running I/O for 1 seconds... 00:06:27.537 1536.00 IOPS, 96.00 MiB/s 00:06:27.537 Latency(us) 00:06:27.537 [2024-10-21T09:51:04.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.537 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:27.537 Verification LBA range: start 0x0 length 0x400 00:06:27.537 Nvme0n1 : 1.02 1574.78 98.42 0.00 0.00 39947.43 8628.91 32112.64 00:06:27.537 [2024-10-21T09:51:04.132Z] =================================================================================================================== 00:06:27.537 [2024-10-21T09:51:04.132Z] Total : 1574.78 98.42 0.00 0.00 39947.43 8628.91 32112.64 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:06:27.798 rmmod nvme_tcp 00:06:27.798 rmmod nvme_fabrics 00:06:27.798 rmmod nvme_keyring 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 779472 ']' 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 779472 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 779472 ']' 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 779472 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 779472 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 779472' 00:06:27.798 killing process with pid 779472 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 779472 00:06:27.798 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 779472 00:06:28.059 [2024-10-21 11:51:04.412754] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.059 11:51:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:29.975 00:06:29.975 real 0m14.731s 00:06:29.975 user 0m23.601s 00:06:29.975 sys 0m6.787s 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.975 ************************************ 00:06:29.975 END TEST nvmf_host_management 00:06:29.975 ************************************ 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.975 11:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.237 ************************************ 00:06:30.237 START TEST nvmf_lvol 00:06:30.237 ************************************ 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:30.237 * Looking for test storage... 00:06:30.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.237 --rc genhtml_branch_coverage=1 00:06:30.237 --rc genhtml_function_coverage=1 00:06:30.237 --rc genhtml_legend=1 00:06:30.237 --rc geninfo_all_blocks=1 00:06:30.237 --rc geninfo_unexecuted_blocks=1 00:06:30.237 00:06:30.237 ' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.237 --rc genhtml_branch_coverage=1 00:06:30.237 --rc genhtml_function_coverage=1 00:06:30.237 --rc genhtml_legend=1 00:06:30.237 --rc geninfo_all_blocks=1 00:06:30.237 --rc geninfo_unexecuted_blocks=1 00:06:30.237 00:06:30.237 ' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.237 --rc genhtml_branch_coverage=1 00:06:30.237 --rc genhtml_function_coverage=1 00:06:30.237 --rc genhtml_legend=1 00:06:30.237 --rc geninfo_all_blocks=1 00:06:30.237 --rc geninfo_unexecuted_blocks=1 00:06:30.237 00:06:30.237 ' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.237 --rc genhtml_branch_coverage=1 00:06:30.237 --rc genhtml_function_coverage=1 00:06:30.237 --rc genhtml_legend=1 00:06:30.237 --rc geninfo_all_blocks=1 00:06:30.237 --rc geninfo_unexecuted_blocks=1 00:06:30.237 00:06:30.237 ' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.237 11:51:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.237 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.238 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.499 11:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.643 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:38.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:38.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.644 11:51:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:38.644 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:38.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:06:38.644 00:06:38.644 --- 10.0.0.2 ping statistics --- 00:06:38.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.644 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
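Both connectivity checks pass (the reply to this second ping follows below). The namespace plumbing that makes this single-host topology work condenses to the sketch below: the target-side port cvl_0_0 is hidden in its own network namespace with 10.0.0.2 while the initiator keeps cvl_0_1 at 10.0.0.1. Interface names and addresses are this testbed's values, so substitute your own:

#!/usr/bin/env bash
# Sketch of the two-endpoint topology built above: one physical port per
# side, with the target port moved into a private network namespace so that
# target and initiator can share one host.
set -e
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                           # target namespace
ip link set cvl_0_0 netns "$NS"              # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, tagged so the
# suite's "iptables-save | grep -v SPDK_NVMF" cleanup can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator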
00:06:38.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:06:38.644 00:06:38.644 --- 10.0.0.1 ping statistics --- 00:06:38.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.644 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=785360 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 785360 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 785360 ']' 00:06:38.644 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.645 [2024-10-21 11:51:14.469363] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
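The EAL parameter dump that follows belongs to this nvmf_tgt instance, which nvmfappstart launched inside the target namespace; the suite blocks in waitforlisten until the app's RPC socket answers before issuing any commands. A rough equivalent without the suite's helpers; the polling loop only approximates waitforlisten and is an assumption, not its actual implementation:

#!/usr/bin/env bash
# Sketch: start the target in the namespace and wait for its RPC socket,
# then create the TCP transport as the lvol test does next. The RPC unix
# socket lives on the shared filesystem, so rpc.py needs no netns exec.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
    sleep 0.2
done

"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192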
00:06:38.645 [2024-10-21 11:51:14.469426] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.645 [2024-10-21 11:51:14.537903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.645 [2024-10-21 11:51:14.585242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.645 [2024-10-21 11:51:14.585289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.645 [2024-10-21 11:51:14.585295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.645 [2024-10-21 11:51:14.585304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.645 [2024-10-21 11:51:14.585309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.645 [2024-10-21 11:51:14.590349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.645 [2024-10-21 11:51:14.590540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.645 [2024-10-21 11:51:14.590637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:38.645 [2024-10-21 11:51:14.904363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.645 11:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:38.645 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:38.645 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:38.905 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:38.905 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:39.166 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:39.427 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5c5397c8-5795-4088-a4f5-0e322e0876d5 00:06:39.427 11:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c5397c8-5795-4088-a4f5-0e322e0876d5 lvol 20 00:06:39.427 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3007a1e5-2a8d-47cb-94cd-8313f5d19c11 00:06:39.427 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:39.689 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3007a1e5-2a8d-47cb-94cd-8313f5d19c11 00:06:39.950 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:40.212 [2024-10-21 11:51:16.577456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.212 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.474 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=785751 00:06:40.474 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:40.474 11:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:41.417 11:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3007a1e5-2a8d-47cb-94cd-8313f5d19c11 MY_SNAPSHOT 00:06:41.678 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=29a35e0d-b13f-45f4-9a42-2f40885eb8e8 00:06:41.678 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3007a1e5-2a8d-47cb-94cd-8313f5d19c11 30 00:06:41.678 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 29a35e0d-b13f-45f4-9a42-2f40885eb8e8 MY_CLONE 00:06:41.939 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6fd6ca30-9e79-4919-9eb0-609add2a622a 00:06:41.939 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6fd6ca30-9e79-4919-9eb0-609add2a622a 00:06:42.509 11:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 785751 00:06:50.644 Initializing NVMe Controllers 00:06:50.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:50.644 Controller IO queue size 128, less than required. 00:06:50.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
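While spdk_nvme_perf spins up (its core-association lines continue below), the scenario it exercises was built by the xtrace'd RPCs above: two 64 MB malloc bdevs striped into raid0, an lvstore on the stripe, a 20 MB lvol exported over TCP, then snapshot, resize to 30 MB, clone, and inflate under load. As one linear sketch, capturing the names and UUIDs each create call prints instead of hard-coding the ones from this run:

#!/usr/bin/env bash
# Sketch of the lvol test body as a linear RPC script. rpc.py prints the
# created bdev/lvstore identifiers, so they are captured here rather than
# reusing the UUIDs seen in this particular log.
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc bdev_malloc_create 64 512                        # -> Malloc0
rpc bdev_malloc_create 64 512                        # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MB volume

rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

snap=$(rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current data
rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume
clone=$(rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of snapshot
rpc bdev_lvol_inflate "$clone"                       # decouple clone from snapshot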
00:06:50.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:50.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:50.644 Initialization complete. Launching workers. 00:06:50.644 ======================================================== 00:06:50.644 Latency(us) 00:06:50.644 Device Information : IOPS MiB/s Average min max 00:06:50.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15970.00 62.38 8016.55 1824.04 46616.79 00:06:50.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17083.70 66.73 7494.48 507.50 53274.17 00:06:50.644 ======================================================== 00:06:50.644 Total : 33053.70 129.12 7746.72 507.50 53274.17 00:06:50.644 00:06:50.644 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.905 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3007a1e5-2a8d-47cb-94cd-8313f5d19c11 00:06:51.165 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c5397c8-5795-4088-a4f5-0e322e0876d5 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:51.426 rmmod nvme_tcp 00:06:51.426 rmmod nvme_fabrics 00:06:51.426 rmmod nvme_keyring 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 785360 ']' 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 785360 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 785360 ']' 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 785360 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 785360 00:06:51.426 11:51:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 785360' 00:06:51.426 killing process with pid 785360 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 785360 00:06:51.426 11:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 785360 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.687 11:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.600 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:53.600 00:06:53.600 real 0m23.517s 00:06:53.600 user 1m3.293s 00:06:53.600 sys 0m8.815s 00:06:53.600 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.600 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.600 ************************************ 00:06:53.601 END TEST nvmf_lvol 00:06:53.601 ************************************ 00:06:53.601 11:51:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:53.601 11:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.601 11:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.601 11:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:53.862 ************************************ 00:06:53.862 START TEST nvmf_lvs_grow 00:06:53.862 ************************************ 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:53.863 * Looking for test storage... 
00:06:53.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.863 --rc genhtml_branch_coverage=1 00:06:53.863 --rc genhtml_function_coverage=1 00:06:53.863 --rc genhtml_legend=1 00:06:53.863 --rc geninfo_all_blocks=1 00:06:53.863 --rc geninfo_unexecuted_blocks=1 00:06:53.863 00:06:53.863 ' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.863 --rc genhtml_branch_coverage=1 00:06:53.863 --rc genhtml_function_coverage=1 00:06:53.863 --rc genhtml_legend=1 00:06:53.863 --rc geninfo_all_blocks=1 00:06:53.863 --rc geninfo_unexecuted_blocks=1 00:06:53.863 00:06:53.863 ' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.863 --rc genhtml_branch_coverage=1 00:06:53.863 --rc genhtml_function_coverage=1 00:06:53.863 --rc genhtml_legend=1 00:06:53.863 --rc geninfo_all_blocks=1 00:06:53.863 --rc geninfo_unexecuted_blocks=1 00:06:53.863 00:06:53.863 ' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.863 --rc genhtml_branch_coverage=1 00:06:53.863 --rc genhtml_function_coverage=1 00:06:53.863 --rc genhtml_legend=1 00:06:53.863 --rc geninfo_all_blocks=1 00:06:53.863 --rc geninfo_unexecuted_blocks=1 00:06:53.863 00:06:53.863 ' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:53.863 11:51:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.863 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:53.864 11:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:02.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:02.166 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.166 11:51:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:02.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:02.166 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:07:02.166 00:07:02.166 --- 10.0.0.2 ping statistics --- 00:07:02.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.166 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:02.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:07:02.166 00:07:02.166 --- 10.0.0.1 ping statistics --- 00:07:02.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.166 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.166 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=792336 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 792336 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 792336 ']' 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.167 11:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.167 [2024-10-21 11:51:38.007070] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:02.167 [2024-10-21 11:51:38.007139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.167 [2024-10-21 11:51:38.096977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.167 [2024-10-21 11:51:38.147393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.167 [2024-10-21 11:51:38.147439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.167 [2024-10-21 11:51:38.147448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.167 [2024-10-21 11:51:38.147455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.167 [2024-10-21 11:51:38.147462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.167 [2024-10-21 11:51:38.148218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.428 11:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.690 [2024-10-21 11:51:39.030793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.690 ************************************ 00:07:02.690 START TEST lvs_grow_clean 00:07:02.690 ************************************ 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:02.690 11:51:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.690 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.951 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:02.951 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.951 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:02.951 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:02.951 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:03.212 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:03.212 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:03.212 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fda8269c-b6c1-4062-b14b-021f936d88c8 lvol 150 00:07:03.473 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab 00:07:03.473 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.473 11:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:03.473 [2024-10-21 11:51:40.036767] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:03.473 [2024-10-21 11:51:40.036848] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:03.473 true 00:07:03.473 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:03.473 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:03.734 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:03.734 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.994 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab 00:07:04.254 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:04.254 [2024-10-21 11:51:40.775108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.254 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=792845 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 792845 /var/tmp/bdevperf.sock 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 792845 ']' 00:07:04.514 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.515 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.515 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.515 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.515 11:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 [2024-10-21 11:51:41.011570] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:04.515 [2024-10-21 11:51:41.011642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792845 ] 00:07:04.515 [2024-10-21 11:51:41.093123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.775 [2024-10-21 11:51:41.145995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.347 11:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.347 11:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:05.347 11:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:05.608 Nvme0n1 00:07:05.608 11:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:05.869 [ 00:07:05.869 { 00:07:05.869 "name": "Nvme0n1", 00:07:05.869 "aliases": [ 00:07:05.869 "d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab" 00:07:05.869 ], 00:07:05.869 "product_name": "NVMe disk", 00:07:05.869 "block_size": 4096, 00:07:05.869 "num_blocks": 38912, 00:07:05.869 "uuid": "d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab", 00:07:05.869 "numa_id": 0, 00:07:05.869 "assigned_rate_limits": { 00:07:05.869 "rw_ios_per_sec": 0, 00:07:05.869 "rw_mbytes_per_sec": 0, 00:07:05.869 "r_mbytes_per_sec": 0, 00:07:05.869 "w_mbytes_per_sec": 0 00:07:05.869 }, 00:07:05.869 "claimed": false, 00:07:05.869 "zoned": false, 00:07:05.869 "supported_io_types": { 00:07:05.869 "read": true, 00:07:05.869 "write": true, 00:07:05.869 "unmap": true, 00:07:05.869 "flush": true, 00:07:05.869 "reset": true, 00:07:05.869 "nvme_admin": true, 00:07:05.869 "nvme_io": true, 00:07:05.869 "nvme_io_md": false, 00:07:05.869 "write_zeroes": true, 00:07:05.869 "zcopy": false, 00:07:05.869 "get_zone_info": false, 00:07:05.869 "zone_management": false, 00:07:05.869 "zone_append": false, 00:07:05.869 "compare": true, 00:07:05.869 "compare_and_write": true, 00:07:05.869 "abort": true, 00:07:05.869 "seek_hole": false, 00:07:05.869 "seek_data": false, 00:07:05.869 "copy": true, 00:07:05.869 "nvme_iov_md": false 00:07:05.869 }, 00:07:05.869 "memory_domains": [ 00:07:05.869 { 00:07:05.869 "dma_device_id": "system", 00:07:05.869 "dma_device_type": 1 00:07:05.869 } 00:07:05.869 ], 00:07:05.869 "driver_specific": { 00:07:05.869 "nvme": [ 00:07:05.869 { 00:07:05.869 "trid": { 00:07:05.869 "trtype": "TCP", 00:07:05.869 "adrfam": "IPv4", 00:07:05.869 "traddr": "10.0.0.2", 00:07:05.869 "trsvcid": "4420", 00:07:05.869 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:05.869 }, 00:07:05.869 "ctrlr_data": { 00:07:05.869 "cntlid": 1, 00:07:05.869 "vendor_id": "0x8086", 00:07:05.869 "model_number": "SPDK bdev Controller", 00:07:05.869 "serial_number": "SPDK0", 00:07:05.869 "firmware_revision": "25.01", 00:07:05.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:05.869 "oacs": { 00:07:05.869 "security": 0, 00:07:05.869 "format": 0, 00:07:05.869 "firmware": 0, 00:07:05.869 "ns_manage": 0 00:07:05.869 }, 00:07:05.869 "multi_ctrlr": true, 00:07:05.869 
"ana_reporting": false 00:07:05.869 }, 00:07:05.869 "vs": { 00:07:05.869 "nvme_version": "1.3" 00:07:05.869 }, 00:07:05.869 "ns_data": { 00:07:05.869 "id": 1, 00:07:05.869 "can_share": true 00:07:05.869 } 00:07:05.869 } 00:07:05.869 ], 00:07:05.869 "mp_policy": "active_passive" 00:07:05.869 } 00:07:05.869 } 00:07:05.869 ] 00:07:05.869 11:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=793169 00:07:05.869 11:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:05.869 11:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:05.869 Running I/O for 10 seconds... 00:07:06.811 Latency(us) 00:07:06.811 [2024-10-21T09:51:43.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.811 Nvme0n1 : 1.00 24684.00 96.42 0.00 0.00 0.00 0.00 0.00 00:07:06.811 [2024-10-21T09:51:43.406Z] =================================================================================================================== 00:07:06.811 [2024-10-21T09:51:43.406Z] Total : 24684.00 96.42 0.00 0.00 0.00 0.00 0.00 00:07:06.811 00:07:07.753 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:08.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.014 Nvme0n1 : 2.00 25052.00 97.86 0.00 0.00 0.00 0.00 0.00 00:07:08.014 [2024-10-21T09:51:44.609Z] =================================================================================================================== 00:07:08.014 [2024-10-21T09:51:44.609Z] Total : 25052.00 97.86 0.00 0.00 0.00 0.00 0.00 00:07:08.014 00:07:08.014 true 00:07:08.014 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:08.014 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:08.274 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:08.274 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:08.274 11:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 793169 00:07:08.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.845 Nvme0n1 : 3.00 25175.33 98.34 0.00 0.00 0.00 0.00 0.00 00:07:08.845 [2024-10-21T09:51:45.440Z] =================================================================================================================== 00:07:08.845 [2024-10-21T09:51:45.440Z] Total : 25175.33 98.34 0.00 0.00 0.00 0.00 0.00 00:07:08.845 00:07:10.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.225 Nvme0n1 : 4.00 25264.50 98.69 0.00 0.00 0.00 0.00 0.00 00:07:10.225 [2024-10-21T09:51:46.820Z] 
=================================================================================================================== 00:07:10.225 [2024-10-21T09:51:46.820Z] Total : 25264.50 98.69 0.00 0.00 0.00 0.00 0.00 00:07:10.225 00:07:11.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.165 Nvme0n1 : 5.00 25318.20 98.90 0.00 0.00 0.00 0.00 0.00 00:07:11.165 [2024-10-21T09:51:47.760Z] =================================================================================================================== 00:07:11.165 [2024-10-21T09:51:47.760Z] Total : 25318.20 98.90 0.00 0.00 0.00 0.00 0.00 00:07:11.165 00:07:12.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.104 Nvme0n1 : 6.00 25354.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:12.104 [2024-10-21T09:51:48.699Z] =================================================================================================================== 00:07:12.104 [2024-10-21T09:51:48.699Z] Total : 25354.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:12.104 00:07:13.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.043 Nvme0n1 : 7.00 25388.57 99.17 0.00 0.00 0.00 0.00 0.00 00:07:13.043 [2024-10-21T09:51:49.638Z] =================================================================================================================== 00:07:13.043 [2024-10-21T09:51:49.638Z] Total : 25388.57 99.17 0.00 0.00 0.00 0.00 0.00 00:07:13.043 00:07:13.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.985 Nvme0n1 : 8.00 25405.50 99.24 0.00 0.00 0.00 0.00 0.00 00:07:13.985 [2024-10-21T09:51:50.580Z] =================================================================================================================== 00:07:13.985 [2024-10-21T09:51:50.580Z] Total : 25405.50 99.24 0.00 0.00 0.00 0.00 0.00 00:07:13.985 00:07:14.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.924 Nvme0n1 : 9.00 25426.44 99.32 0.00 0.00 0.00 0.00 0.00 00:07:14.924 [2024-10-21T09:51:51.519Z] =================================================================================================================== 00:07:14.924 [2024-10-21T09:51:51.519Z] Total : 25426.44 99.32 0.00 0.00 0.00 0.00 0.00 00:07:14.924 00:07:15.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.865 Nvme0n1 : 10.00 25443.70 99.39 0.00 0.00 0.00 0.00 0.00 00:07:15.865 [2024-10-21T09:51:52.460Z] =================================================================================================================== 00:07:15.865 [2024-10-21T09:51:52.460Z] Total : 25443.70 99.39 0.00 0.00 0.00 0.00 0.00 00:07:15.865 00:07:15.865 00:07:15.865 Latency(us) 00:07:15.865 [2024-10-21T09:51:52.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.865 Nvme0n1 : 10.00 25446.04 99.40 0.00 0.00 5026.64 2594.13 19005.44 00:07:15.865 [2024-10-21T09:51:52.460Z] =================================================================================================================== 00:07:15.865 [2024-10-21T09:51:52.460Z] Total : 25446.04 99.40 0.00 0.00 5026.64 2594.13 19005.44 00:07:15.865 { 00:07:15.865 "results": [ 00:07:15.865 { 00:07:15.865 "job": "Nvme0n1", 00:07:15.865 "core_mask": "0x2", 00:07:15.865 "workload": "randwrite", 00:07:15.865 "status": "finished", 00:07:15.865 "queue_depth": 128, 00:07:15.865 "io_size": 4096, 00:07:15.865 
"runtime": 10.004109, 00:07:15.865 "iops": 25446.044220429827, 00:07:15.865 "mibps": 99.39861023605401, 00:07:15.865 "io_failed": 0, 00:07:15.865 "io_timeout": 0, 00:07:15.865 "avg_latency_us": 5026.639631371163, 00:07:15.865 "min_latency_us": 2594.133333333333, 00:07:15.865 "max_latency_us": 19005.44 00:07:15.865 } 00:07:15.865 ], 00:07:15.865 "core_count": 1 00:07:15.865 } 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 792845 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 792845 ']' 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 792845 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.865 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 792845 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 792845' 00:07:16.124 killing process with pid 792845 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 792845 00:07:16.124 Received shutdown signal, test time was about 10.000000 seconds 00:07:16.124 00:07:16.124 Latency(us) 00:07:16.124 [2024-10-21T09:51:52.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.124 [2024-10-21T09:51:52.719Z] =================================================================================================================== 00:07:16.124 [2024-10-21T09:51:52.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 792845 00:07:16.124 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.384 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:16.644 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:16.644 11:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:16.644 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:16.644 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:16.644 11:51:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.905 [2024-10-21 11:51:53.321585] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.905 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:17.165 request: 00:07:17.165 { 00:07:17.165 "uuid": "fda8269c-b6c1-4062-b14b-021f936d88c8", 00:07:17.165 "method": "bdev_lvol_get_lvstores", 00:07:17.165 "req_id": 1 00:07:17.165 } 00:07:17.165 Got JSON-RPC error response 00:07:17.165 response: 00:07:17.165 { 00:07:17.165 "code": -19, 00:07:17.165 "message": "No such device" 00:07:17.165 } 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.165 aio_bdev 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.165 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.424 11:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab -t 2000 00:07:17.683 [ 00:07:17.683 { 00:07:17.683 "name": "d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab", 00:07:17.683 "aliases": [ 00:07:17.683 "lvs/lvol" 00:07:17.683 ], 00:07:17.683 "product_name": "Logical Volume", 00:07:17.683 "block_size": 4096, 00:07:17.683 "num_blocks": 38912, 00:07:17.683 "uuid": "d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab", 00:07:17.683 "assigned_rate_limits": { 00:07:17.683 "rw_ios_per_sec": 0, 00:07:17.683 "rw_mbytes_per_sec": 0, 00:07:17.683 "r_mbytes_per_sec": 0, 00:07:17.683 "w_mbytes_per_sec": 0 00:07:17.683 }, 00:07:17.683 "claimed": false, 00:07:17.683 "zoned": false, 00:07:17.683 "supported_io_types": { 00:07:17.683 "read": true, 00:07:17.683 "write": true, 00:07:17.683 "unmap": true, 00:07:17.683 "flush": false, 00:07:17.683 "reset": true, 00:07:17.683 "nvme_admin": false, 00:07:17.683 "nvme_io": false, 00:07:17.683 "nvme_io_md": false, 00:07:17.683 "write_zeroes": true, 00:07:17.683 "zcopy": false, 00:07:17.683 "get_zone_info": false, 00:07:17.683 "zone_management": false, 00:07:17.683 "zone_append": false, 00:07:17.683 "compare": false, 00:07:17.683 "compare_and_write": false, 00:07:17.683 "abort": false, 00:07:17.683 "seek_hole": true, 00:07:17.683 "seek_data": true, 00:07:17.683 "copy": false, 00:07:17.683 "nvme_iov_md": false 00:07:17.683 }, 00:07:17.683 "driver_specific": { 00:07:17.683 "lvol": { 00:07:17.683 "lvol_store_uuid": "fda8269c-b6c1-4062-b14b-021f936d88c8", 00:07:17.683 "base_bdev": "aio_bdev", 00:07:17.683 "thin_provision": false, 00:07:17.683 "num_allocated_clusters": 38, 00:07:17.683 "snapshot": false, 00:07:17.683 "clone": false, 00:07:17.683 "esnap_clone": false 00:07:17.683 } 00:07:17.683 } 00:07:17.683 } 00:07:17.683 ] 00:07:17.683 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:17.683 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:17.683 
11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:17.683 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:17.684 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:17.684 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:17.943 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:17.943 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4f3edfd-9378-482b-b9e3-abb4b8fdf0ab 00:07:18.203 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fda8269c-b6c1-4062-b14b-021f936d88c8 00:07:18.203 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.464 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.464 00:07:18.464 real 0m15.880s 00:07:18.464 user 0m15.504s 00:07:18.464 sys 0m1.436s 00:07:18.464 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.464 11:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:18.464 ************************************ 00:07:18.464 END TEST lvs_grow_clean 00:07:18.464 ************************************ 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.464 ************************************ 00:07:18.464 START TEST lvs_grow_dirty 00:07:18.464 ************************************ 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.464 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.724 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:18.724 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:18.984 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:18.984 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:18.984 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 lvol 150 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.244 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:19.503 [2024-10-21 11:51:55.944692] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:19.503 [2024-10-21 11:51:55.944732] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:19.503 true 00:07:19.503 11:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:19.503 11:51:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:19.764 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:19.764 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.764 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:20.025 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:20.025 [2024-10-21 11:51:56.602648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.025 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=796136 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 796136 /var/tmp/bdevperf.sock 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 796136 ']' 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.285 11:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.285 [2024-10-21 11:51:56.836568] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:20.285 [2024-10-21 11:51:56.836620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796136 ] 00:07:20.545 [2024-10-21 11:51:56.911263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.545 [2024-10-21 11:51:56.941198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.115 11:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.115 11:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:21.115 11:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:21.374 Nvme0n1 00:07:21.636 11:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:21.636 [ 00:07:21.636 { 00:07:21.636 "name": "Nvme0n1", 00:07:21.636 "aliases": [ 00:07:21.636 "4fbceebe-3204-40c2-8af0-a5312de608f5" 00:07:21.636 ], 00:07:21.636 "product_name": "NVMe disk", 00:07:21.636 "block_size": 4096, 00:07:21.636 "num_blocks": 38912, 00:07:21.636 "uuid": "4fbceebe-3204-40c2-8af0-a5312de608f5", 00:07:21.636 "numa_id": 0, 00:07:21.636 "assigned_rate_limits": { 00:07:21.636 "rw_ios_per_sec": 0, 00:07:21.636 "rw_mbytes_per_sec": 0, 00:07:21.636 "r_mbytes_per_sec": 0, 00:07:21.636 "w_mbytes_per_sec": 0 00:07:21.636 }, 00:07:21.636 "claimed": false, 00:07:21.636 "zoned": false, 00:07:21.636 "supported_io_types": { 00:07:21.636 "read": true, 00:07:21.636 "write": true, 00:07:21.636 "unmap": true, 00:07:21.636 "flush": true, 00:07:21.636 "reset": true, 00:07:21.636 "nvme_admin": true, 00:07:21.636 "nvme_io": true, 00:07:21.636 "nvme_io_md": false, 00:07:21.636 "write_zeroes": true, 00:07:21.636 "zcopy": false, 00:07:21.636 "get_zone_info": false, 00:07:21.636 "zone_management": false, 00:07:21.636 "zone_append": false, 00:07:21.636 "compare": true, 00:07:21.636 "compare_and_write": true, 00:07:21.636 "abort": true, 00:07:21.636 "seek_hole": false, 00:07:21.636 "seek_data": false, 00:07:21.636 "copy": true, 00:07:21.636 "nvme_iov_md": false 00:07:21.636 }, 00:07:21.636 "memory_domains": [ 00:07:21.636 { 00:07:21.636 "dma_device_id": "system", 00:07:21.636 "dma_device_type": 1 00:07:21.636 } 00:07:21.636 ], 00:07:21.636 "driver_specific": { 00:07:21.636 "nvme": [ 00:07:21.636 { 00:07:21.636 "trid": { 00:07:21.636 "trtype": "TCP", 00:07:21.636 "adrfam": "IPv4", 00:07:21.636 "traddr": "10.0.0.2", 00:07:21.636 "trsvcid": "4420", 00:07:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:21.636 }, 00:07:21.636 "ctrlr_data": { 00:07:21.636 "cntlid": 1, 00:07:21.636 "vendor_id": "0x8086", 00:07:21.636 "model_number": "SPDK bdev Controller", 00:07:21.636 "serial_number": "SPDK0", 00:07:21.636 "firmware_revision": "25.01", 00:07:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:21.636 "oacs": { 00:07:21.636 "security": 0, 00:07:21.636 "format": 0, 00:07:21.636 "firmware": 0, 00:07:21.636 "ns_manage": 0 00:07:21.636 }, 00:07:21.636 "multi_ctrlr": true, 00:07:21.636 
"ana_reporting": false 00:07:21.636 }, 00:07:21.636 "vs": { 00:07:21.636 "nvme_version": "1.3" 00:07:21.636 }, 00:07:21.636 "ns_data": { 00:07:21.636 "id": 1, 00:07:21.636 "can_share": true 00:07:21.636 } 00:07:21.636 } 00:07:21.636 ], 00:07:21.636 "mp_policy": "active_passive" 00:07:21.636 } 00:07:21.636 } 00:07:21.636 ] 00:07:21.636 11:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=796343 00:07:21.636 11:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:21.636 11:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:21.897 Running I/O for 10 seconds... 00:07:22.837 Latency(us) 00:07:22.837 [2024-10-21T09:51:59.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.837 Nvme0n1 : 1.00 25112.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:22.837 [2024-10-21T09:51:59.432Z] =================================================================================================================== 00:07:22.837 [2024-10-21T09:51:59.432Z] Total : 25112.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:22.837 00:07:23.780 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:23.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.780 Nvme0n1 : 2.00 25253.50 98.65 0.00 0.00 0.00 0.00 0.00 00:07:23.780 [2024-10-21T09:52:00.375Z] =================================================================================================================== 00:07:23.780 [2024-10-21T09:52:00.375Z] Total : 25253.50 98.65 0.00 0.00 0.00 0.00 0.00 00:07:23.780 00:07:23.780 true 00:07:23.780 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:23.780 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:24.041 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:24.041 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:24.041 11:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 796343 00:07:24.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.982 Nvme0n1 : 3.00 25326.33 98.93 0.00 0.00 0.00 0.00 0.00 00:07:24.982 [2024-10-21T09:52:01.577Z] =================================================================================================================== 00:07:24.982 [2024-10-21T09:52:01.577Z] Total : 25326.33 98.93 0.00 0.00 0.00 0.00 0.00 00:07:24.982 00:07:25.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.923 Nvme0n1 : 4.00 25362.75 99.07 0.00 0.00 0.00 0.00 0.00 00:07:25.923 [2024-10-21T09:52:02.518Z] 
=================================================================================================================== 00:07:25.923 [2024-10-21T09:52:02.518Z] Total : 25362.75 99.07 0.00 0.00 0.00 0.00 0.00 00:07:25.923 00:07:26.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.862 Nvme0n1 : 5.00 25397.60 99.21 0.00 0.00 0.00 0.00 0.00 00:07:26.862 [2024-10-21T09:52:03.457Z] =================================================================================================================== 00:07:26.862 [2024-10-21T09:52:03.457Z] Total : 25397.60 99.21 0.00 0.00 0.00 0.00 0.00 00:07:26.862 00:07:27.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.801 Nvme0n1 : 6.00 25431.33 99.34 0.00 0.00 0.00 0.00 0.00 00:07:27.801 [2024-10-21T09:52:04.396Z] =================================================================================================================== 00:07:27.801 [2024-10-21T09:52:04.396Z] Total : 25431.33 99.34 0.00 0.00 0.00 0.00 0.00 00:07:27.801 00:07:28.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.740 Nvme0n1 : 7.00 25445.86 99.40 0.00 0.00 0.00 0.00 0.00 00:07:28.740 [2024-10-21T09:52:05.335Z] =================================================================================================================== 00:07:28.740 [2024-10-21T09:52:05.335Z] Total : 25445.86 99.40 0.00 0.00 0.00 0.00 0.00 00:07:28.740 00:07:30.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.119 Nvme0n1 : 8.00 25464.88 99.47 0.00 0.00 0.00 0.00 0.00 00:07:30.119 [2024-10-21T09:52:06.714Z] =================================================================================================================== 00:07:30.119 [2024-10-21T09:52:06.714Z] Total : 25464.88 99.47 0.00 0.00 0.00 0.00 0.00 00:07:30.119 00:07:30.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.689 Nvme0n1 : 9.00 25479.67 99.53 0.00 0.00 0.00 0.00 0.00 00:07:30.689 [2024-10-21T09:52:07.284Z] =================================================================================================================== 00:07:30.689 [2024-10-21T09:52:07.284Z] Total : 25479.67 99.53 0.00 0.00 0.00 0.00 0.00 00:07:30.689 00:07:32.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.073 Nvme0n1 : 10.00 25491.80 99.58 0.00 0.00 0.00 0.00 0.00 00:07:32.073 [2024-10-21T09:52:08.668Z] =================================================================================================================== 00:07:32.073 [2024-10-21T09:52:08.668Z] Total : 25491.80 99.58 0.00 0.00 0.00 0.00 0.00 00:07:32.073 00:07:32.073 00:07:32.073 Latency(us) 00:07:32.073 [2024-10-21T09:52:08.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.073 Nvme0n1 : 10.00 25495.73 99.59 0.00 0.00 5017.56 2990.08 10758.83 00:07:32.073 [2024-10-21T09:52:08.668Z] =================================================================================================================== 00:07:32.073 [2024-10-21T09:52:08.668Z] Total : 25495.73 99.59 0.00 0.00 5017.56 2990.08 10758.83 00:07:32.073 { 00:07:32.073 "results": [ 00:07:32.073 { 00:07:32.073 "job": "Nvme0n1", 00:07:32.073 "core_mask": "0x2", 00:07:32.073 "workload": "randwrite", 00:07:32.073 "status": "finished", 00:07:32.073 "queue_depth": 128, 00:07:32.073 "io_size": 4096, 00:07:32.073 
"runtime": 10.003478, 00:07:32.073 "iops": 25495.732584207213, 00:07:32.073 "mibps": 99.59270540705943, 00:07:32.073 "io_failed": 0, 00:07:32.073 "io_timeout": 0, 00:07:32.073 "avg_latency_us": 5017.5607872044, 00:07:32.074 "min_latency_us": 2990.08, 00:07:32.074 "max_latency_us": 10758.826666666666 00:07:32.074 } 00:07:32.074 ], 00:07:32.074 "core_count": 1 00:07:32.074 } 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 796136 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 796136 ']' 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 796136 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 796136 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 796136' 00:07:32.074 killing process with pid 796136 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 796136 00:07:32.074 Received shutdown signal, test time was about 10.000000 seconds 00:07:32.074 00:07:32.074 Latency(us) 00:07:32.074 [2024-10-21T09:52:08.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.074 [2024-10-21T09:52:08.669Z] =================================================================================================================== 00:07:32.074 [2024-10-21T09:52:08.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 796136 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.074 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.334 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:32.334 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:32.595 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:32.595 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:32.595 11:52:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 792336 00:07:32.595 11:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 792336 00:07:32.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 792336 Killed "${NVMF_APP[@]}" "$@" 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=798613 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 798613 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 798613 ']' 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.595 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:32.595 [2024-10-21 11:52:09.069231] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:32.595 [2024-10-21 11:52:09.069285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.595 [2024-10-21 11:52:09.153217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.595 [2024-10-21 11:52:09.182249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.595 [2024-10-21 11:52:09.182274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.595 [2024-10-21 11:52:09.182280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.595 [2024-10-21 11:52:09.182284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:32.595 [2024-10-21 11:52:09.182288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.595 [2024-10-21 11:52:09.182739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:33.536 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.537 11:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.537 [2024-10-21 11:52:10.055964] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:33.537 [2024-10-21 11:52:10.056048] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:33.537 [2024-10-21 11:52:10.056070] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.537 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:33.798 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4fbceebe-3204-40c2-8af0-a5312de608f5 -t 2000 00:07:34.058 [ 00:07:34.058 { 00:07:34.058 "name": "4fbceebe-3204-40c2-8af0-a5312de608f5", 00:07:34.058 "aliases": [ 00:07:34.058 "lvs/lvol" 00:07:34.058 ], 00:07:34.058 "product_name": "Logical Volume", 00:07:34.058 "block_size": 4096, 00:07:34.058 "num_blocks": 38912, 00:07:34.058 "uuid": "4fbceebe-3204-40c2-8af0-a5312de608f5", 00:07:34.058 "assigned_rate_limits": { 00:07:34.058 "rw_ios_per_sec": 0, 00:07:34.058 "rw_mbytes_per_sec": 0, 
00:07:34.058 "r_mbytes_per_sec": 0, 00:07:34.058 "w_mbytes_per_sec": 0 00:07:34.058 }, 00:07:34.058 "claimed": false, 00:07:34.058 "zoned": false, 00:07:34.058 "supported_io_types": { 00:07:34.058 "read": true, 00:07:34.058 "write": true, 00:07:34.058 "unmap": true, 00:07:34.058 "flush": false, 00:07:34.058 "reset": true, 00:07:34.058 "nvme_admin": false, 00:07:34.058 "nvme_io": false, 00:07:34.058 "nvme_io_md": false, 00:07:34.058 "write_zeroes": true, 00:07:34.058 "zcopy": false, 00:07:34.058 "get_zone_info": false, 00:07:34.058 "zone_management": false, 00:07:34.058 "zone_append": false, 00:07:34.058 "compare": false, 00:07:34.058 "compare_and_write": false, 00:07:34.058 "abort": false, 00:07:34.058 "seek_hole": true, 00:07:34.058 "seek_data": true, 00:07:34.058 "copy": false, 00:07:34.058 "nvme_iov_md": false 00:07:34.058 }, 00:07:34.058 "driver_specific": { 00:07:34.058 "lvol": { 00:07:34.058 "lvol_store_uuid": "eab59ac6-4248-4166-ae21-3cce30b9b8f8", 00:07:34.058 "base_bdev": "aio_bdev", 00:07:34.058 "thin_provision": false, 00:07:34.058 "num_allocated_clusters": 38, 00:07:34.058 "snapshot": false, 00:07:34.058 "clone": false, 00:07:34.058 "esnap_clone": false 00:07:34.058 } 00:07:34.058 } 00:07:34.058 } 00:07:34.058 ] 00:07:34.058 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:34.059 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:34.059 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:34.059 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:34.059 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:34.059 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:34.320 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:34.320 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:34.581 [2024-10-21 11:52:10.920627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:34.581 11:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:34.581 request: 00:07:34.581 { 00:07:34.581 "uuid": "eab59ac6-4248-4166-ae21-3cce30b9b8f8", 00:07:34.581 "method": "bdev_lvol_get_lvstores", 00:07:34.581 "req_id": 1 00:07:34.581 } 00:07:34.581 Got JSON-RPC error response 00:07:34.581 response: 00:07:34.581 { 00:07:34.581 "code": -19, 00:07:34.581 "message": "No such device" 00:07:34.581 } 00:07:34.581 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:34.581 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.581 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.581 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.581 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.843 aio_bdev 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:34.843 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:34.843 11:52:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:35.103 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4fbceebe-3204-40c2-8af0-a5312de608f5 -t 2000 00:07:35.103 [ 00:07:35.103 { 00:07:35.103 "name": "4fbceebe-3204-40c2-8af0-a5312de608f5", 00:07:35.103 "aliases": [ 00:07:35.103 "lvs/lvol" 00:07:35.103 ], 00:07:35.103 "product_name": "Logical Volume", 00:07:35.103 "block_size": 4096, 00:07:35.103 "num_blocks": 38912, 00:07:35.103 "uuid": "4fbceebe-3204-40c2-8af0-a5312de608f5", 00:07:35.103 "assigned_rate_limits": { 00:07:35.103 "rw_ios_per_sec": 0, 00:07:35.104 "rw_mbytes_per_sec": 0, 00:07:35.104 "r_mbytes_per_sec": 0, 00:07:35.104 "w_mbytes_per_sec": 0 00:07:35.104 }, 00:07:35.104 "claimed": false, 00:07:35.104 "zoned": false, 00:07:35.104 "supported_io_types": { 00:07:35.104 "read": true, 00:07:35.104 "write": true, 00:07:35.104 "unmap": true, 00:07:35.104 "flush": false, 00:07:35.104 "reset": true, 00:07:35.104 "nvme_admin": false, 00:07:35.104 "nvme_io": false, 00:07:35.104 "nvme_io_md": false, 00:07:35.104 "write_zeroes": true, 00:07:35.104 "zcopy": false, 00:07:35.104 "get_zone_info": false, 00:07:35.104 "zone_management": false, 00:07:35.104 "zone_append": false, 00:07:35.104 "compare": false, 00:07:35.104 "compare_and_write": false, 00:07:35.104 "abort": false, 00:07:35.104 "seek_hole": true, 00:07:35.104 "seek_data": true, 00:07:35.104 "copy": false, 00:07:35.104 "nvme_iov_md": false 00:07:35.104 }, 00:07:35.104 "driver_specific": { 00:07:35.104 "lvol": { 00:07:35.104 "lvol_store_uuid": "eab59ac6-4248-4166-ae21-3cce30b9b8f8", 00:07:35.104 "base_bdev": "aio_bdev", 00:07:35.104 "thin_provision": false, 00:07:35.104 "num_allocated_clusters": 38, 00:07:35.104 "snapshot": false, 00:07:35.104 "clone": false, 00:07:35.104 "esnap_clone": false 00:07:35.104 } 00:07:35.104 } 00:07:35.104 } 00:07:35.104 ] 00:07:35.104 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:35.104 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:35.104 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:35.364 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:35.364 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:35.364 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:35.624 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:35.624 11:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4fbceebe-3204-40c2-8af0-a5312de608f5 00:07:35.625 11:52:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eab59ac6-4248-4166-ae21-3cce30b9b8f8 00:07:35.912 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.173 00:07:36.173 real 0m17.483s 00:07:36.173 user 0m45.774s 00:07:36.173 sys 0m3.047s 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 ************************************ 00:07:36.173 END TEST lvs_grow_dirty 00:07:36.173 ************************************ 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:36.173 nvmf_trace.0 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.173 rmmod nvme_tcp 00:07:36.173 rmmod nvme_fabrics 00:07:36.173 rmmod nvme_keyring 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:36.173 
11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 798613 ']' 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 798613 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 798613 ']' 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 798613 00:07:36.173 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:36.174 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.174 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 798613 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 798613' 00:07:36.434 killing process with pid 798613 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 798613 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 798613 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.434 11:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.980 00:07:38.980 real 0m44.747s 00:07:38.980 user 1m7.652s 00:07:38.980 sys 0m10.623s 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.980 ************************************ 00:07:38.980 END TEST nvmf_lvs_grow 00:07:38.980 ************************************ 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.980 11:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.980 ************************************ 00:07:38.980 START TEST nvmf_bdev_io_wait 00:07:38.980 ************************************ 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:38.980 * Looking for test storage... 00:07:38.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:38.980 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.981 --rc genhtml_branch_coverage=1 00:07:38.981 --rc genhtml_function_coverage=1 00:07:38.981 --rc genhtml_legend=1 00:07:38.981 --rc geninfo_all_blocks=1 00:07:38.981 --rc geninfo_unexecuted_blocks=1 00:07:38.981 00:07:38.981 ' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.981 --rc genhtml_branch_coverage=1 00:07:38.981 --rc genhtml_function_coverage=1 00:07:38.981 --rc genhtml_legend=1 00:07:38.981 --rc geninfo_all_blocks=1 00:07:38.981 --rc geninfo_unexecuted_blocks=1 00:07:38.981 00:07:38.981 ' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.981 --rc genhtml_branch_coverage=1 00:07:38.981 --rc genhtml_function_coverage=1 00:07:38.981 --rc genhtml_legend=1 00:07:38.981 --rc geninfo_all_blocks=1 00:07:38.981 --rc geninfo_unexecuted_blocks=1 00:07:38.981 00:07:38.981 ' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.981 --rc genhtml_branch_coverage=1 00:07:38.981 --rc genhtml_function_coverage=1 00:07:38.981 --rc genhtml_legend=1 00:07:38.981 --rc geninfo_all_blocks=1 00:07:38.981 --rc geninfo_unexecuted_blocks=1 00:07:38.981 00:07:38.981 ' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.981 11:52:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.981 11:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:47.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:47.266 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.266 11:52:22 
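
[Note] gather_supported_nvmf_pci_devs above builds the e810/x722/mlx arrays by vendor:device ID (0x8086:0x1592 and 0x8086:0x159b for E810, 0x8086:0x37d2 for X722, the 0x15b3:* list for Mellanox) and, because SPDK_TEST_NVMF_NICS=e810, keeps only the E810 entries before mapping each PCI function to its kernel netdev. A sysfs-based sketch of the same matching (illustrative: the real helper works from a pre-built lspci cache, pci_bus_cache, and knows the full ID table):

intel=0x8086
declare -a e810=() net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue   # E810 100G / 25G
    echo "Found ${pci##*/} ($vendor - $device)"
    e810+=("${pci##*/}")
    # the netdev name appears under .../net/ once a kernel driver is bound
    for net in "$pci"/net/*; do
        [[ -e $net ]] && net_devs+=("${net##*/}")
    done
done
echo "net devices: ${net_devs[*]}"    # e.g. cvl_0_0 cvl_0_1, as in the log
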
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:47.266 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:47.266 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:47.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:07:47.267 00:07:47.267 --- 10.0.0.2 ping statistics --- 00:07:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.267 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
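
[Note] nvmf_tcp_init above wires the two E810 ports into a self-contained rig: the target-side port (cvl_0_0) moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, while the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, so NVMe/TCP traffic crosses the physical link instead of loopback. Reassembled from the trace, same names and addresses:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from a clean slate
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# tag the rule so teardown can strip everything SPDK added in one grep pass
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The bidirectional pings that follow in the log are the gate: a reply on both paths confirms the rig before the target is started.
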
00:07:47.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:07:47.267 00:07:47.267 --- 10.0.0.1 ping statistics --- 00:07:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.267 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=803694 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 803694 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 803694 ']' 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.267 11:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 [2024-10-21 11:52:22.792205] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
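
[Note] nvmfappstart below launches nvmf_tgt inside the namespace with --wait-for-rpc, so the app idles until RPC-time options are set; waitforlisten then polls the RPC socket (unix sockets are filesystem-scoped, so the caller needs no netns handling) before the test provisions the subsystem. A condensed sketch: the polling loop is illustrative, while the RPC sequence is the one traced below (rpc_cmd is a thin wrapper over rpc.py):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1    # target died during startup
    sleep 0.1
done
./scripts/rpc.py bdev_set_options -p 5 -c 1      # only settable pre-init,
./scripts/rpc.py framework_start_init            # hence --wait-for-rpc
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
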
00:07:47.267 [2024-10-21 11:52:22.792269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.267 [2024-10-21 11:52:22.881727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.267 [2024-10-21 11:52:22.936138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.267 [2024-10-21 11:52:22.936198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.267 [2024-10-21 11:52:22.936207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.267 [2024-10-21 11:52:22.936214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.267 [2024-10-21 11:52:22.936221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.267 [2024-10-21 11:52:22.938301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.267 [2024-10-21 11:52:22.938461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.267 [2024-10-21 11:52:22.938657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.267 [2024-10-21 11:52:22.938659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:47.267 [2024-10-21 11:52:23.706912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 Malloc0 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.267 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.268 [2024-10-21 11:52:23.766195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=803749 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=803751 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:47.268 { 00:07:47.268 "params": { 
00:07:47.268 "name": "Nvme$subsystem", 00:07:47.268 "trtype": "$TEST_TRANSPORT", 00:07:47.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "$NVMF_PORT", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.268 "hdgst": ${hdgst:-false}, 00:07:47.268 "ddgst": ${ddgst:-false} 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 } 00:07:47.268 EOF 00:07:47.268 )") 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=803753 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=803756 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:47.268 { 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme$subsystem", 00:07:47.268 "trtype": "$TEST_TRANSPORT", 00:07:47.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "$NVMF_PORT", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.268 "hdgst": ${hdgst:-false}, 00:07:47.268 "ddgst": ${ddgst:-false} 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 } 00:07:47.268 EOF 00:07:47.268 )") 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:47.268 { 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme$subsystem", 00:07:47.268 "trtype": "$TEST_TRANSPORT", 00:07:47.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "$NVMF_PORT", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.268 "hdgst": ${hdgst:-false}, 
00:07:47.268 "ddgst": ${ddgst:-false} 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 } 00:07:47.268 EOF 00:07:47.268 )") 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:47.268 { 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme$subsystem", 00:07:47.268 "trtype": "$TEST_TRANSPORT", 00:07:47.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "$NVMF_PORT", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.268 "hdgst": ${hdgst:-false}, 00:07:47.268 "ddgst": ${ddgst:-false} 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 } 00:07:47.268 EOF 00:07:47.268 )") 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 803749 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme1", 00:07:47.268 "trtype": "tcp", 00:07:47.268 "traddr": "10.0.0.2", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "4420", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.268 "hdgst": false, 00:07:47.268 "ddgst": false 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 }' 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme1", 00:07:47.268 "trtype": "tcp", 00:07:47.268 "traddr": "10.0.0.2", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "4420", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.268 "hdgst": false, 00:07:47.268 "ddgst": false 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 }' 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme1", 00:07:47.268 "trtype": "tcp", 00:07:47.268 "traddr": "10.0.0.2", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "4420", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.268 "hdgst": false, 00:07:47.268 "ddgst": false 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 }' 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:47.268 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:47.268 "params": { 00:07:47.268 "name": "Nvme1", 00:07:47.268 "trtype": "tcp", 00:07:47.268 "traddr": "10.0.0.2", 00:07:47.268 "adrfam": "ipv4", 00:07:47.268 "trsvcid": "4420", 00:07:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.268 "hdgst": false, 00:07:47.268 "ddgst": false 00:07:47.268 }, 00:07:47.268 "method": "bdev_nvme_attach_controller" 00:07:47.268 }' 00:07:47.268 [2024-10-21 11:52:23.820007] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:47.268 [2024-10-21 11:52:23.820058] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:47.268 [2024-10-21 11:52:23.824307] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:47.268 [2024-10-21 11:52:23.824359] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:47.268 [2024-10-21 11:52:23.825339] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:47.268 [2024-10-21 11:52:23.825388] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:47.268 [2024-10-21 11:52:23.826129] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
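
[Note] Four bdevperf instances then run concurrently against the same cnode1, one per workload (write, read, flush, unmap), each pinned to its own core (-m 0x10/0x20/0x40/0x80) with a distinct shm id (-i 1..4), which is why the EAL lines above show separate hugepage file prefixes spdk1..spdk4. Sketch of the fan-out, reusing the gen_nvmf_target_json sketch earlier:

BDEVPERF=./build/examples/bdevperf
common=(-q 128 -o 4096 -t 1 -s 256)    # 256 MB of DPDK memory per instance
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${common[@]}" -w write & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${common[@]}" -w read  & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${common[@]}" -w flush & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${common[@]}" -w unmap & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload latency tables that follow in the log are the four instances reporting independently; flush is far above the others in IOPS because it never touches the Malloc0 data path.
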
00:07:47.268 [2024-10-21 11:52:23.826174] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:47.529 [2024-10-21 11:52:23.963624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.529 [2024-10-21 11:52:23.992706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.529 [2024-10-21 11:52:24.017862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.529 [2024-10-21 11:52:24.046969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:47.529 [2024-10-21 11:52:24.066535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.529 [2024-10-21 11:52:24.094965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:47.529 [2024-10-21 11:52:24.112604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.789 [2024-10-21 11:52:24.141195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:47.789 Running I/O for 1 seconds... 00:07:47.789 Running I/O for 1 seconds... 00:07:47.789 Running I/O for 1 seconds... 00:07:47.789 Running I/O for 1 seconds... 00:07:48.731 13853.00 IOPS, 54.11 MiB/s 00:07:48.731 Latency(us) 00:07:48.731 [2024-10-21T09:52:25.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.731 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:48.731 Nvme1n1 : 1.01 13920.69 54.38 0.00 0.00 9168.03 4642.13 16711.68 00:07:48.731 [2024-10-21T09:52:25.326Z] =================================================================================================================== 00:07:48.731 [2024-10-21T09:52:25.326Z] Total : 13920.69 54.38 0.00 0.00 9168.03 4642.13 16711.68 00:07:48.731 188472.00 IOPS, 736.22 MiB/s 00:07:48.731 Latency(us) 00:07:48.731 [2024-10-21T09:52:25.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.732 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:48.732 Nvme1n1 : 1.00 188102.10 734.77 0.00 0.00 676.87 303.79 1966.08 00:07:48.732 [2024-10-21T09:52:25.327Z] =================================================================================================================== 00:07:48.732 [2024-10-21T09:52:25.327Z] Total : 188102.10 734.77 0.00 0.00 676.87 303.79 1966.08 00:07:48.993 11278.00 IOPS, 44.05 MiB/s 00:07:48.993 Latency(us) 00:07:48.993 [2024-10-21T09:52:25.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.993 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:48.993 Nvme1n1 : 1.01 11350.15 44.34 0.00 0.00 11238.31 4887.89 19333.12 00:07:48.993 [2024-10-21T09:52:25.588Z] =================================================================================================================== 00:07:48.993 [2024-10-21T09:52:25.588Z] Total : 11350.15 44.34 0.00 0.00 11238.31 4887.89 19333.12 00:07:48.993 11051.00 IOPS, 43.17 MiB/s 00:07:48.993 Latency(us) 00:07:48.993 [2024-10-21T09:52:25.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.993 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:48.993 Nvme1n1 : 1.01 11108.21 43.39 0.00 0.00 11486.30 4669.44 23702.19 00:07:48.993 [2024-10-21T09:52:25.588Z] 
=================================================================================================================== 00:07:48.993 [2024-10-21T09:52:25.588Z] Total : 11108.21 43.39 0.00 0.00 11486.30 4669.44 23702.19 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 803751 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 803753 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 803756 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.993 rmmod nvme_tcp 00:07:48.993 rmmod nvme_fabrics 00:07:48.993 rmmod nvme_keyring 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 803694 ']' 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 803694 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 803694 ']' 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 803694 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.993 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 803694 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 803694' 00:07:49.254 killing process with pid 803694 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 803694 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 803694 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.254 11:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.802 00:07:51.802 real 0m12.795s 00:07:51.802 user 0m18.671s 00:07:51.802 sys 0m7.080s 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 ************************************ 00:07:51.802 END TEST nvmf_bdev_io_wait 00:07:51.802 ************************************ 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 ************************************ 00:07:51.802 START TEST nvmf_queue_depth 00:07:51.802 ************************************ 00:07:51.802 11:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:51.802 * Looking for test storage... 
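
[Note] The bdev_io_wait run above ends with nvmftestfini: flush outstanding I/O, unload the kernel NVMe/TCP modules, kill the target, and strip only the SPDK-tagged iptables rules before removing the namespace. Reassembled from the teardown trace (ip netns delete stands in for the _remove_spdk_ns helper):

sync
modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and
modprobe -v -r nvme-fabrics    # nvme_keyring all unloading at this point
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

The nvmf_queue_depth test that starts next re-sources nvmf/common.sh and rebuilds the same rig from scratch, which is why its trace repeats the lcov probe and environment setup seen earlier.
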
00:07:51.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:51.802 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.803 --rc genhtml_branch_coverage=1 00:07:51.803 --rc genhtml_function_coverage=1 00:07:51.803 --rc genhtml_legend=1 00:07:51.803 --rc geninfo_all_blocks=1 00:07:51.803 --rc geninfo_unexecuted_blocks=1 00:07:51.803 00:07:51.803 ' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.803 --rc genhtml_branch_coverage=1 00:07:51.803 --rc genhtml_function_coverage=1 00:07:51.803 --rc genhtml_legend=1 00:07:51.803 --rc geninfo_all_blocks=1 00:07:51.803 --rc geninfo_unexecuted_blocks=1 00:07:51.803 00:07:51.803 ' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.803 --rc genhtml_branch_coverage=1 00:07:51.803 --rc genhtml_function_coverage=1 00:07:51.803 --rc genhtml_legend=1 00:07:51.803 --rc geninfo_all_blocks=1 00:07:51.803 --rc geninfo_unexecuted_blocks=1 00:07:51.803 00:07:51.803 ' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.803 --rc genhtml_branch_coverage=1 00:07:51.803 --rc genhtml_function_coverage=1 00:07:51.803 --rc genhtml_legend=1 00:07:51.803 --rc geninfo_all_blocks=1 00:07:51.803 --rc geninfo_unexecuted_blocks=1 00:07:51.803 00:07:51.803 ' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:51.803 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.804 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:59.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:59.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:59.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.949 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:59.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:59.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:59.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms
00:07:59.950
00:07:59.950 --- 10.0.0.2 ping statistics ---
00:07:59.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.950 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:59.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:59.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:07:59.950
00:07:59.950 --- 10.0.0.1 ping statistics ---
00:07:59.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.950 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=808432
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 808432
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 808432 ']'
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:59.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:59.950 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:59.950 [2024-10-21 11:52:35.701552] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
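Condensed, the nvmftestinit sequence traced above comes down to the sketch below. The cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are the values from this run; on another host the ice driver may expose different interface names, so treat them as placeholders rather than the harness's exact code:

  #!/usr/bin/env bash
  # Rough equivalent of the netns setup performed by nvmftestinit above.
  TARGET_IF=cvl_0_0        # physical port handed to the target namespace
  INITIATOR_IF=cvl_0_1     # physical port kept in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port through the host firewall
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # root ns -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator port

Moving the target port into its own network namespace forces initiator/target traffic across the two physical E810 ports instead of the kernel loopback path, which is the point of the NET_TYPE=phy variant of this test; the two pings above confirm that datapath before any NVMe traffic is attempted.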
00:07:59.950 [2024-10-21 11:52:35.701618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.950 [2024-10-21 11:52:35.794001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.950 [2024-10-21 11:52:35.844312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.950 [2024-10-21 11:52:35.844366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.950 [2024-10-21 11:52:35.844375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.950 [2024-10-21 11:52:35.844382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.950 [2024-10-21 11:52:35.844389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.950 [2024-10-21 11:52:35.845174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.950 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.950 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:59.950 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:59.950 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.950 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.212 [2024-10-21 11:52:36.562328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.212 Malloc0 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.212 11:52:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.212 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.213 [2024-10-21 11:52:36.623526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=808775 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 808775 /var/tmp/bdevperf.sock 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 808775 ']' 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.213 11:52:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.213 [2024-10-21 11:52:36.681179] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
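The queue_depth.sh provisioning traced above, together with the controller attach and perform_tests calls that follow below, reduces to this sketch. rpc_cmd is the harness's wrapper around scripts/rpc.py; the commands are shown here invoked directly, with the socket paths from this run, and the waitforlisten synchronization the harness does between steps is elided:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Provision the target (RPCs go to the default /var/tmp/spdk.sock)
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start bdevperf idle (-z), attach the remote namespace over TCP, then run
  # 10 s of verify I/O at queue depth 1024 with 4 KiB I/Os
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 queue depth is the property under test: the run below checks that a 1024-deep queue against the TCP target completes verify I/O with zero failures and zero timeouts, as reported in the results that follow.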
00:08:00.213 [2024-10-21 11:52:36.681244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808775 ] 00:08:00.213 [2024-10-21 11:52:36.763513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.481 [2024-10-21 11:52:36.815989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.053 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.053 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:01.053 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:01.053 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.053 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.314 NVMe0n1 00:08:01.314 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.314 11:52:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.314 Running I/O for 10 seconds... 00:08:03.641 10273.00 IOPS, 40.13 MiB/s [2024-10-21T09:52:41.177Z] 11056.50 IOPS, 43.19 MiB/s [2024-10-21T09:52:42.118Z] 11264.33 IOPS, 44.00 MiB/s [2024-10-21T09:52:43.059Z] 11468.75 IOPS, 44.80 MiB/s [2024-10-21T09:52:44.000Z] 11879.00 IOPS, 46.40 MiB/s [2024-10-21T09:52:44.941Z] 12120.17 IOPS, 47.34 MiB/s [2024-10-21T09:52:45.884Z] 12320.57 IOPS, 48.13 MiB/s [2024-10-21T09:52:47.268Z] 12520.50 IOPS, 48.91 MiB/s [2024-10-21T09:52:47.838Z] 12626.44 IOPS, 49.32 MiB/s [2024-10-21T09:52:48.100Z] 12705.60 IOPS, 49.63 MiB/s 00:08:11.505 Latency(us) 00:08:11.505 [2024-10-21T09:52:48.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.505 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:11.505 Verification LBA range: start 0x0 length 0x4000 00:08:11.505 NVMe0n1 : 10.04 12752.92 49.82 0.00 0.00 80032.43 3659.09 65099.09 00:08:11.505 [2024-10-21T09:52:48.100Z] =================================================================================================================== 00:08:11.505 [2024-10-21T09:52:48.100Z] Total : 12752.92 49.82 0.00 0.00 80032.43 3659.09 65099.09 00:08:11.505 { 00:08:11.505 "results": [ 00:08:11.505 { 00:08:11.505 "job": "NVMe0n1", 00:08:11.505 "core_mask": "0x1", 00:08:11.505 "workload": "verify", 00:08:11.505 "status": "finished", 00:08:11.505 "verify_range": { 00:08:11.505 "start": 0, 00:08:11.505 "length": 16384 00:08:11.505 }, 00:08:11.505 "queue_depth": 1024, 00:08:11.505 "io_size": 4096, 00:08:11.505 "runtime": 10.040915, 00:08:11.505 "iops": 12752.921422001878, 00:08:11.505 "mibps": 49.81609930469484, 00:08:11.505 "io_failed": 0, 00:08:11.505 "io_timeout": 0, 00:08:11.505 "avg_latency_us": 80032.43463682439, 00:08:11.505 "min_latency_us": 3659.0933333333332, 00:08:11.505 "max_latency_us": 65099.09333333333 00:08:11.505 } 00:08:11.505 ], 00:08:11.505 "core_count": 1 00:08:11.505 } 00:08:11.505 11:52:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 808775 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 808775 ']' 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 808775 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 808775 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 808775' 00:08:11.505 killing process with pid 808775 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 808775 00:08:11.505 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.505 00:08:11.505 Latency(us) 00:08:11.505 [2024-10-21T09:52:48.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.505 [2024-10-21T09:52:48.100Z] =================================================================================================================== 00:08:11.505 [2024-10-21T09:52:48.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.505 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 808775 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.505 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.505 rmmod nvme_tcp 00:08:11.505 rmmod nvme_fabrics 00:08:11.766 rmmod nvme_keyring 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 808432 ']' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 808432 ']' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 808432' 00:08:11.766 killing process with pid 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 808432 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.766 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.339 00:08:14.339 real 0m22.488s 00:08:14.339 user 0m25.924s 00:08:14.339 sys 0m6.967s 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.339 ************************************ 00:08:14.339 END TEST nvmf_queue_depth 00:08:14.339 ************************************ 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.339 ************************************ 00:08:14.339 START TEST nvmf_target_multipath 00:08:14.339 ************************************ 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.339 * Looking for test storage... 00:08:14.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:14.339 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.340 --rc genhtml_branch_coverage=1 00:08:14.340 --rc genhtml_function_coverage=1 00:08:14.340 --rc genhtml_legend=1 00:08:14.340 --rc geninfo_all_blocks=1 00:08:14.340 --rc geninfo_unexecuted_blocks=1 00:08:14.340 00:08:14.340 ' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.340 --rc genhtml_branch_coverage=1 00:08:14.340 --rc genhtml_function_coverage=1 00:08:14.340 --rc genhtml_legend=1 00:08:14.340 --rc geninfo_all_blocks=1 00:08:14.340 --rc geninfo_unexecuted_blocks=1 00:08:14.340 00:08:14.340 ' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.340 --rc genhtml_branch_coverage=1 00:08:14.340 --rc genhtml_function_coverage=1 00:08:14.340 --rc genhtml_legend=1 00:08:14.340 --rc geninfo_all_blocks=1 00:08:14.340 --rc geninfo_unexecuted_blocks=1 00:08:14.340 00:08:14.340 ' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.340 --rc genhtml_branch_coverage=1 00:08:14.340 --rc genhtml_function_coverage=1 00:08:14.340 --rc genhtml_legend=1 00:08:14.340 --rc geninfo_all_blocks=1 00:08:14.340 --rc geninfo_unexecuted_blocks=1 00:08:14.340 00:08:14.340 ' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.340 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:22.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:22.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:22.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.485 11:52:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:22.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.485 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:08:22.486 00:08:22.486 --- 10.0.0.2 ping statistics --- 00:08:22.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.486 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:22.486 00:08:22.486 --- 10.0.0.1 ping statistics --- 00:08:22.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.486 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:22.486 only one NIC for nvmf test 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
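The nvmf_tcp_init trace above boils down to a short, fixed recipe: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and a tagged iptables rule opens the NVMe/TCP port before connectivity is verified in both directions. A condensed sketch of that sequence, using the interface, namespace, and address names exactly as they appear in the trace:

    # Flush both ports, then move the target-side port into its own namespace;
    # the initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator gets 10.0.0.1/24, target gets 10.0.0.2/24, then links come up
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open TCP/4420; the SPDK_NVMF comment tag is what the iptr cleanup
    # greps out of iptables-save later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before any NVMe-oF traffic
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once this is in place, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the target process later runs with only the target-side interface visible.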
00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.486 rmmod nvme_tcp 00:08:22.486 rmmod nvme_fabrics 00:08:22.486 rmmod nvme_keyring 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.486 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.871 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.872 00:08:23.872 real 0m9.905s 00:08:23.872 user 0m2.162s 00:08:23.872 sys 0m5.709s 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:23.872 ************************************ 00:08:23.872 END TEST nvmf_target_multipath 00:08:23.872 ************************************ 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.872 11:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 ************************************ 00:08:24.133 START TEST nvmf_zcopy 00:08:24.133 ************************************ 00:08:24.133 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:24.133 * Looking for test storage... 
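The multipath case bails out cleanly here: with NVMF_SECOND_TARGET_IP left empty (only one NIC pair was wired up), multipath.sh prints "only one NIC for nvmf test", tears the topology back down via nvmftestfini, and exits 0, so run_test records it as passed and moves on to nvmf_zcopy. Going by the START/END banners and the real/user/sys summary in the trace, run_test behaves roughly like the sketch below; the actual helper lives in autotest_common.sh, so treat this as a hypothetical approximation of its shape, not its literal body:

    # Hypothetical sketch of the run_test wrapper inferred from the banners above
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # Invocation as traced:
    run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp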
00:08:24.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.133 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:24.133 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:24.133 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:24.133 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:24.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.134 --rc genhtml_branch_coverage=1 00:08:24.134 --rc genhtml_function_coverage=1 00:08:24.134 --rc genhtml_legend=1 00:08:24.134 --rc geninfo_all_blocks=1 00:08:24.134 --rc geninfo_unexecuted_blocks=1 00:08:24.134 00:08:24.134 ' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:24.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.134 --rc genhtml_branch_coverage=1 00:08:24.134 --rc genhtml_function_coverage=1 00:08:24.134 --rc genhtml_legend=1 00:08:24.134 --rc geninfo_all_blocks=1 00:08:24.134 --rc geninfo_unexecuted_blocks=1 00:08:24.134 00:08:24.134 ' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:24.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.134 --rc genhtml_branch_coverage=1 00:08:24.134 --rc genhtml_function_coverage=1 00:08:24.134 --rc genhtml_legend=1 00:08:24.134 --rc geninfo_all_blocks=1 00:08:24.134 --rc geninfo_unexecuted_blocks=1 00:08:24.134 00:08:24.134 ' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:24.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.134 --rc genhtml_branch_coverage=1 00:08:24.134 --rc genhtml_function_coverage=1 00:08:24.134 --rc genhtml_legend=1 00:08:24.134 --rc geninfo_all_blocks=1 00:08:24.134 --rc geninfo_unexecuted_blocks=1 00:08:24.134 00:08:24.134 ' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.134 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:32.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:32.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.293 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:32.294 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:32.294 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.294 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:08:32.294 00:08:32.294 --- 10.0.0.2 ping statistics --- 00:08:32.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.294 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:08:32.294 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:08:32.294 00:08:32.294 --- 10.0.0.1 ping statistics --- 00:08:32.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.295 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=819470 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 819470 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 819470 ']' 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.295 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.295 [2024-10-21 11:53:08.279020] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:32.295 [2024-10-21 11:53:08.279094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.295 [2024-10-21 11:53:08.368747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.295 [2024-10-21 11:53:08.419231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.295 [2024-10-21 11:53:08.419280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.295 [2024-10-21 11:53:08.419288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.295 [2024-10-21 11:53:08.419295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.295 [2024-10-21 11:53:08.419301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.295 [2024-10-21 11:53:08.420077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.561 [2024-10-21 11:53:09.140928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.561 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 [2024-10-21 11:53:09.165213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 malloc0 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.823 { 00:08:32.823 "params": { 00:08:32.823 "name": "Nvme$subsystem", 00:08:32.823 "trtype": "$TEST_TRANSPORT", 00:08:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.823 "adrfam": "ipv4", 00:08:32.823 "trsvcid": "$NVMF_PORT", 00:08:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.823 "hdgst": ${hdgst:-false}, 00:08:32.823 "ddgst": ${ddgst:-false} 00:08:32.823 }, 00:08:32.823 "method": "bdev_nvme_attach_controller" 00:08:32.823 } 00:08:32.823 EOF 00:08:32.823 )") 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
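Collapsed out of the rpc_cmd traces above, the zcopy target setup is the following RPC sequence against the nvmf_tgt instance started inside the namespace. The flag readings in the comments are interpretations of the traced arguments (-c 0 zeroes the in-capsule data size, --zcopy enables zero-copy on the TCP transport; -o rides along from NVMF_TRANSPORT_OPTS='-t tcp -o'):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport with zero-copy on and in-capsule data disabled
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem cnode1: allow any host (-a), serial number (-s), max 10 namespaces (-m)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB RAM-backed bdev with 4 KiB blocks, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then plays the initiator from the root namespace, driving a 10-second verify workload at queue depth 128 with 8 KiB I/O (-t 10 -q 128 -w verify -o 8192); the bdev_nvme_attach_controller parameters it consumes over /dev/fd/62 are the JSON printed in the next lines of the log.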
00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:32.823 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.823 "params": { 00:08:32.823 "name": "Nvme1", 00:08:32.823 "trtype": "tcp", 00:08:32.823 "traddr": "10.0.0.2", 00:08:32.823 "adrfam": "ipv4", 00:08:32.823 "trsvcid": "4420", 00:08:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.823 "hdgst": false, 00:08:32.823 "ddgst": false 00:08:32.823 }, 00:08:32.823 "method": "bdev_nvme_attach_controller" 00:08:32.823 }' 00:08:32.823 [2024-10-21 11:53:09.266713] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:32.823 [2024-10-21 11:53:09.266781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819574 ] 00:08:32.823 [2024-10-21 11:53:09.349121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.823 [2024-10-21 11:53:09.403193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.084 Running I/O for 10 seconds... 00:08:34.992 6458.00 IOPS, 50.45 MiB/s [2024-10-21T09:53:12.969Z] 6514.00 IOPS, 50.89 MiB/s [2024-10-21T09:53:13.910Z] 7146.33 IOPS, 55.83 MiB/s [2024-10-21T09:53:14.850Z] 7799.00 IOPS, 60.93 MiB/s [2024-10-21T09:53:15.792Z] 8192.20 IOPS, 64.00 MiB/s [2024-10-21T09:53:16.733Z] 8452.17 IOPS, 66.03 MiB/s [2024-10-21T09:53:17.672Z] 8635.00 IOPS, 67.46 MiB/s [2024-10-21T09:53:18.614Z] 8772.12 IOPS, 68.53 MiB/s [2024-10-21T09:53:20.001Z] 8879.56 IOPS, 69.37 MiB/s [2024-10-21T09:53:20.001Z] 8967.20 IOPS, 70.06 MiB/s 00:08:43.406 Latency(us) 00:08:43.406 [2024-10-21T09:53:20.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:43.406 Verification LBA range: start 0x0 length 0x1000 00:08:43.406 Nvme1n1 : 10.01 8971.79 70.09 0.00 0.00 14221.36 2375.68 28398.93 00:08:43.406 [2024-10-21T09:53:20.001Z] =================================================================================================================== 00:08:43.406 [2024-10-21T09:53:20.001Z] Total : 8971.79 70.09 0.00 0.00 14221.36 2375.68 28398.93 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=821672 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:43.406 { 00:08:43.406 "params": { 00:08:43.406 "name": 
"Nvme$subsystem", 00:08:43.406 "trtype": "$TEST_TRANSPORT", 00:08:43.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.406 "adrfam": "ipv4", 00:08:43.406 "trsvcid": "$NVMF_PORT", 00:08:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.406 "hdgst": ${hdgst:-false}, 00:08:43.406 "ddgst": ${ddgst:-false} 00:08:43.406 }, 00:08:43.406 "method": "bdev_nvme_attach_controller" 00:08:43.406 } 00:08:43.406 EOF 00:08:43.406 )") 00:08:43.406 [2024-10-21 11:53:19.713688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.406 [2024-10-21 11:53:19.713720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:43.406 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:43.406 "params": { 00:08:43.406 "name": "Nvme1", 00:08:43.406 "trtype": "tcp", 00:08:43.406 "traddr": "10.0.0.2", 00:08:43.406 "adrfam": "ipv4", 00:08:43.406 "trsvcid": "4420", 00:08:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.406 "hdgst": false, 00:08:43.406 "ddgst": false 00:08:43.406 }, 00:08:43.406 "method": "bdev_nvme_attach_controller" 00:08:43.406 }' 00:08:43.406 [2024-10-21 11:53:19.725692] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.406 [2024-10-21 11:53:19.725701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.406 [2024-10-21 11:53:19.737721] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.406 [2024-10-21 11:53:19.737729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.406 [2024-10-21 11:53:19.749750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.406 [2024-10-21 11:53:19.749757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.406 [2024-10-21 11:53:19.757532] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:43.406 [2024-10-21 11:53:19.757579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821672 ]
00:08:43.406 [2024-10-21 11:53:19.761781] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:43.406 [2024-10-21 11:53:19.761788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at ~12 ms intervals, 11:53:19.773 through 11:53:19.821 ...]
00:08:43.406 [2024-10-21 11:53:19.832802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pairs continue, 11:53:19.833 through 11:53:19.858 ...]
00:08:43.406 [2024-10-21 11:53:19.862279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... error pairs continue, 11:53:19.870 through 11:53:20.158 ...]
00:08:43.669 Running I/O for 5 seconds...
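
This second run mirrors the first but with -w randrw -M 50 (a 50/50 read/write mix) for 5 seconds at the same queue depth and 8 KiB I/O size. The --json /dev/fd/63 argument is not a file on disk: the harness invokes bdevperf with --json <(gen_nvmf_target_json), and bash process substitution hands the child the generated config through a /dev/fd path. A two-line illustration of the mechanism (any commands work):

    # process substitution: each <(...) is replaced by a /dev/fd/NN path
    diff <(echo old) <(echo new)    # the child reads both inputs via /dev/fd
    echo <(true)                    # prints the substituted path, e.g. /dev/fd/63
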
[... error pairs continue, 11:53:20.170 through 11:53:21.166 ...]
00:08:44.716 19161.00 IOPS, 149.70 MiB/s [2024-10-21T09:53:21.311Z]
[... error pairs continue, 11:53:21.179 through 11:53:22.166 ...]
00:08:45.609 19253.50 IOPS, 150.42 MiB/s [2024-10-21T09:53:22.204Z]
[... error pairs continue, 11:53:22.179 through 11:53:22.347 ...]
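
The interim counters above follow the same 8 KiB arithmetic as the first run's table: MiB/s = IOPS x 8192 / 2^20. A quick check of both samples (values copied from the log):

    # sanity check of the per-second counters against the 8 KiB I/O size
    awk 'BEGIN { printf "%.2f %.2f\n", 19161.00*8192/1048576, 19253.50*8192/1048576 }'
    # -> 149.70 150.42, matching the MiB/s printed next to each IOPS sample
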
[... error pairs continue, 11:53:22.360 through 11:53:23.109 ...]
00:08:46.663 [2024-10-21 11:53:23.122748]
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.122763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.136071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.136086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.149433] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.149448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.162392] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.162406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.175718] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.175732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 19265.33 IOPS, 150.51 MiB/s [2024-10-21T09:53:23.258Z] [2024-10-21 11:53:23.188684] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.188699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.202378] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.202392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.215714] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.215729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.229202] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.229217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.242822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.242836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.663 [2024-10-21 11:53:23.255656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.663 [2024-10-21 11:53:23.255671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.268703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.268718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.282244] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.282259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.295890] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.295905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.308528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:46.924 [2024-10-21 11:53:23.308547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.322082] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.322097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.335187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.335202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.348544] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.348559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.361292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.361307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.374061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.374075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.386782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.386797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.399203] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.399218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.411640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.411655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.424353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.424368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.437086] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.437101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.450253] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.450268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.463210] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.463225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.476205] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.476220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.489337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.489352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.502540] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.502555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.924 [2024-10-21 11:53:23.515705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.924 [2024-10-21 11:53:23.515720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.528992] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.529007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.542482] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.542496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.555641] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.555660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.568673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.568687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.581397] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.581412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.594689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.594703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.608000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.608014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.620792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.620807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.185 [2024-10-21 11:53:23.633938] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.185 [2024-10-21 11:53:23.633952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.646722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.646736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.660086] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.660100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.672968] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.672982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.685819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.685833] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.698255] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.698269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.711617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.711631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.725168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.725182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.737773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.737787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.750260] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.750274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.762903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.762917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.186 [2024-10-21 11:53:23.776166] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.186 [2024-10-21 11:53:23.776180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.789746] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.789761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.803128] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.803146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.816431] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.816445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.829976] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.829991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.842954] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.842968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.855814] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.855829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.869060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.869074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.882252] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.882267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.895520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.895535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.908502] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.908516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.921120] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.921134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.934707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.934721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.948062] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.948076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.961640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.961654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.975197] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.975211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:23.988591] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:23.988605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:24.001344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:24.001358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:24.014755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:24.014769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:24.028114] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:24.028129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.447 [2024-10-21 11:53:24.040803] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.447 [2024-10-21 11:53:24.040817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.054263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.054285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.067718] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.067732] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.081233] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.081247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.094437] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.094451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.108285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.108299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.120893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.120907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.133631] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.133646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.146434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.146448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.159924] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.159938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.173144] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.173158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 19261.75 IOPS, 150.48 MiB/s [2024-10-21T09:53:24.304Z] [2024-10-21 11:53:24.185724] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.185738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.199467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.199481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.212713] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.212727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.226426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.226440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.239637] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.239652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.252122] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.252136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 
11:53:24.265515] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.265529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.278805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.278820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.291692] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.291707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.709 [2024-10-21 11:53:24.304115] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.709 [2024-10-21 11:53:24.304130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.317084] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.317099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.330846] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.330860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.343302] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.343316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.357068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.357082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.369585] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.369599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.382474] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.382488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.395607] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.395622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.408670] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.408684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.421229] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.421244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.434456] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.434471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.447232] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.447247] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.460713] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.460727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.474293] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.474308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.487778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.487793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.501460] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.501475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.514723] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.514738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.527871] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.527886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.540607] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.540622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-10-21 11:53:24.553943] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-10-21 11:53:24.553957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.566669] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.566684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.579676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.579691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.593005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.593020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.606166] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.606180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.619067] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.619081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.632201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.632215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.645313] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.645333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.658844] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.658859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.672037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.672051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.685551] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.685565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.698610] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.698625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.711519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.711533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.723802] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.723817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.737212] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.737227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.750404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.750418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.763109] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.763123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.775667] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.775682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.788215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.788234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.801078] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.801092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.232 [2024-10-21 11:53:24.814560] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.232 [2024-10-21 11:53:24.814575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.828139] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.828154] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.841143] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.841158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.854013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.854028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.867013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.867028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.880049] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.880064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.893545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.893560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.907134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.907148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.919723] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.919738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.933319] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.933338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.946310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.946329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.959691] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.959706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.972743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.972757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.986310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.986330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:24.999870] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:24.999885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:25.013285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.494 [2024-10-21 11:53:25.013299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.494 [2024-10-21 11:53:25.026633] 
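The two-line pair condensed above is the target's signature for an add-namespace RPC that names an NSID already allocated: subsystem.c rejects the NSID, and the RPC layer then reports the failed call. A minimal way to reproduce it by hand, assuming a running target and the default rpc.py socket (the malloc0 bdev name here is illustrative, not taken from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds; NSID 1 is now taken
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat fails: Requested NSID 1 already in use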
[... error pair resumes at 11:53:25.026 and repeats through 11:53:25.169 ...]
00:08:48.755 19265.20 IOPS, 150.51 MiB/s [2024-10-21T09:53:25.350Z]
[2024-10-21 11:53:25.181822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-21 11:53:25.181837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:48.755
00:08:48.755                                                                   Latency(us)
00:08:48.755 [2024-10-21T09:53:25.350Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:48.755 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:48.755 Nvme1n1            :               5.01   19268.39     150.53       0.00     0.00    6637.37    2676.05   16711.68
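The MiB/s column above follows directly from the IOPS column and the job's 8192-byte I/O size (IOPS times 8 KiB, expressed in MiB):

  awk 'BEGIN { printf "%.2f\n", 19268.39 * 8192 / (1024 * 1024) }'   # prints 150.53, matching the Nvme1n1 row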
===================================================================================================================
00:08:48.755 [2024-10-21T09:53:25.350Z] Total              :                      19268.39     150.53       0.00     0.00    6637.37    2676.05   16711.68
00:08:48.755 [2024-10-21 11:53:25.191326] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:48.755 [2024-10-21 11:53:25.191340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair repeats at ~12 ms intervals through 11:53:25.287; repetitions elided ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (821672) - No such process
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 821672
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.756 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:48.756 delay0
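Here zcopy.sh puts a delay bdev in front of the namespace. The rpc_cmd call just traced is equivalent to the rpc.py invocation sketched below; per the delay bdev module, -r/-t are the average and 99th-percentile read latencies and -w/-n the write-side pair, all in microseconds, so delay0 injects roughly one second of latency on top of malloc0 (presumably so the abort test traced just below has slow, in-flight I/O to cancel):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1                         # expose the slow bdev as NSID 1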
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:53:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:49.016 [2024-10-21 11:53:25.447700] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:57.154 Initializing NVMe Controllers
00:08:57.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:57.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:57.154 Initialization complete. Launching workers.
00:08:57.154 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 33694
00:08:57.154 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33807, failed to submit 124
00:08:57.154 success 33722, unsuccessful 85, failed 0
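The abort example's counters above are internally consistent: every I/O the namespace worker saw (completed plus failed) maps to one abort attempt (submitted plus failed-to-submit), and the success/unsuccessful split accounts for exactly the submitted aborts:

  awk 'BEGIN { print 237 + 33694, 33807 + 124, 33722 + 85 }'   # 33931 33931 33807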
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:57.154 rmmod nvme_tcp
00:08:57.154 rmmod nvme_fabrics
00:08:57.154 rmmod nvme_keyring
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 819470 ']'
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 819470
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 819470 ']'
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 819470
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:57.154 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819470
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819470'
killing process with pid 819470
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 819470
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 819470
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:53:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:58.540
00:08:58.540 real    0m34.337s
00:08:58.540 user    0m45.021s
00:08:58.540 sys     0m11.973s
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:58.540 ************************************
00:08:58.540 END TEST nvmf_zcopy
00:08:58.540 ************************************
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:58.540 ************************************
00:08:58.540 START TEST nvmf_nmic
00:08:58.540 ************************************
00:08:58.540 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:58.540 * Looking for test storage...
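One detail from the teardown traced just above: the iptr helper at nvmf/common.sh@789 surfaces as an iptables-save, a grep -v SPDK_NVMF, and an iptables-restore run back to back, i.e. it rewrites the firewall rules minus anything SPDK tagged. A sketch of that pipeline (the real function body is not shown in this log):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK_NVMF-tagged rules, keep everything else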
00:08:58.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.540 --rc genhtml_branch_coverage=1 00:08:58.540 --rc genhtml_function_coverage=1 00:08:58.540 --rc genhtml_legend=1 00:08:58.540 --rc geninfo_all_blocks=1 00:08:58.540 --rc geninfo_unexecuted_blocks=1 00:08:58.540 00:08:58.540 ' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.540 --rc genhtml_branch_coverage=1 00:08:58.540 --rc genhtml_function_coverage=1 00:08:58.540 --rc genhtml_legend=1 00:08:58.540 --rc geninfo_all_blocks=1 00:08:58.540 --rc geninfo_unexecuted_blocks=1 00:08:58.540 00:08:58.540 ' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.540 --rc genhtml_branch_coverage=1 00:08:58.540 --rc genhtml_function_coverage=1 00:08:58.540 --rc genhtml_legend=1 00:08:58.540 --rc geninfo_all_blocks=1 00:08:58.540 --rc geninfo_unexecuted_blocks=1 00:08:58.540 00:08:58.540 ' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.540 --rc genhtml_branch_coverage=1 00:08:58.540 --rc genhtml_function_coverage=1 00:08:58.540 --rc genhtml_legend=1 00:08:58.540 --rc geninfo_all_blocks=1 00:08:58.540 --rc geninfo_unexecuted_blocks=1 00:08:58.540 00:08:58.540 ' 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
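The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.0: 'lt 1.15 2' splits both version strings on '.', '-' and ':' into arrays and compares them field by field, and because 1 < 2 it returns 0, so the fallback --rc lcov_* options get exported. A minimal standalone sketch of that field-wise comparison (version_lt is a made-up name; the real logic lives in cmp_versions and assumes numeric fields):

  # Sketch only: dotted-version "less than", mirroring the IFS=.-: splitting
  # seen in the trace above. Missing fields default to 0.
  version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field wins
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2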
00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.540 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.541 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:58.802 
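The "[: : integer expression expected" complaint above is common.sh line 33 evaluating '[' '' -eq 1 ']': the flag being tested is empty in this environment, and the numeric -eq operator needs integers on both sides, so test prints the warning and returns false (the script simply carries on). A hedged one-line reproduction, with a made-up variable name since the real one is already expanded away in the trace:

  flag=''                               # empty, as in this run
  [ "$flag" -eq 1 ] && echo huge        # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo huge   # defaulting to 0 keeps it quiet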
11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.802 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:06.945 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:06.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:06.945 11:53:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:06.945 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:06.945 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.945 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:09:06.946 00:09:06.946 --- 10.0.0.2 ping statistics --- 00:09:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.946 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:09:06.946 00:09:06.946 --- 10.0.0.1 ping statistics --- 00:09:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.946 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=828531 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 828531 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 828531 ']' 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.946 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.946 [2024-10-21 11:53:42.734099] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
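nvmf_tcp_init above wires the two E810 ports back-to-back: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened with an SPDK_NVMF-tagged iptables rule, and a one-packet ping in each direction confirms the path before nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace, assuming the same interface names:

  ip netns add cvl_0_0_ns_spdk                       # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator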
00:09:06.946 [2024-10-21 11:53:42.734163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.946 [2024-10-21 11:53:42.821735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.946 [2024-10-21 11:53:42.876409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.946 [2024-10-21 11:53:42.876461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.946 [2024-10-21 11:53:42.876470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.946 [2024-10-21 11:53:42.876478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.946 [2024-10-21 11:53:42.876486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.946 [2024-10-21 11:53:42.878571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.946 [2024-10-21 11:53:42.878735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.946 [2024-10-21 11:53:42.878899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.946 [2024-10-21 11:53:42.878900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 [2024-10-21 11:53:43.618904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 Malloc0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 [2024-10-21 11:53:43.698480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:07.209 test case1: single bdev can't be used in multiple subsystems 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 [2024-10-21 11:53:43.734333] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:07.209 [2024-10-21 11:53:43.734360] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:07.209 [2024-10-21 11:53:43.734368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.209 request: 00:09:07.209 { 00:09:07.209 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:07.209 "namespace": { 00:09:07.209 "bdev_name": "Malloc0", 00:09:07.209 "no_auto_visible": false 
00:09:07.209 }, 00:09:07.209 "method": "nvmf_subsystem_add_ns", 00:09:07.209 "req_id": 1 00:09:07.209 } 00:09:07.209 Got JSON-RPC error response 00:09:07.209 response: 00:09:07.209 { 00:09:07.209 "code": -32602, 00:09:07.209 "message": "Invalid parameters" 00:09:07.209 } 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:07.209 Adding namespace failed - expected result. 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:07.209 test case2: host connect to nvmf target in multiple paths 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.209 [2024-10-21 11:53:43.746539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.209 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.125 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:10.509 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.509 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.509 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.509 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.509 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.420 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:12.421 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:12.421 [global] 00:09:12.421 thread=1 00:09:12.421 invalidate=1 00:09:12.421 rw=write 00:09:12.421 time_based=1 00:09:12.421 runtime=1 00:09:12.421 ioengine=libaio 00:09:12.421 direct=1 00:09:12.421 bs=4096 00:09:12.421 iodepth=1 00:09:12.421 norandommap=0 00:09:12.421 numjobs=1 00:09:12.421 00:09:12.421 verify_dump=1 00:09:12.421 verify_backlog=512 00:09:12.421 verify_state_save=0 00:09:12.421 do_verify=1 00:09:12.421 verify=crc32c-intel 00:09:12.421 [job0] 00:09:12.421 filename=/dev/nvme0n1 00:09:12.421 Could not set queue depth (nvme0n1) 00:09:12.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.682 fio-3.35 00:09:12.682 Starting 1 thread 00:09:14.066 00:09:14.066 job0: (groupid=0, jobs=1): err= 0: pid=829886: Mon Oct 21 11:53:50 2024 00:09:14.066 read: IOPS=44, BW=180KiB/s (184kB/s)(180KiB/1001msec) 00:09:14.066 slat (nsec): min=25085, max=40831, avg=25882.84, stdev=2329.63 00:09:14.066 clat (usec): min=980, max=42057, avg=14621.19, stdev=19311.22 00:09:14.066 lat (usec): min=1005, max=42083, avg=14647.07, stdev=19311.24 00:09:14.066 clat percentiles (usec): 00:09:14.066 | 1.00th=[ 979], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1106], 00:09:14.066 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:09:14.066 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:14.066 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:14.066 | 99.99th=[42206] 00:09:14.066 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:14.066 slat (usec): min=10, max=28283, avg=83.75, stdev=1248.76 00:09:14.066 clat (usec): min=218, max=807, avg=575.86, stdev=103.07 00:09:14.066 lat (usec): min=230, max=28887, avg=659.61, stdev=1254.60 00:09:14.066 clat percentiles (usec): 00:09:14.066 | 1.00th=[ 322], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 486], 00:09:14.066 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:09:14.066 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:09:14.066 | 99.00th=[ 742], 99.50th=[ 775], 99.90th=[ 807], 99.95th=[ 807], 00:09:14.066 | 99.99th=[ 807] 00:09:14.066 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:14.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:14.066 lat (usec) : 250=0.54%, 500=21.54%, 750=69.12%, 1000=0.90% 00:09:14.066 lat (msec) : 2=5.21%, 50=2.69% 00:09:14.067 cpu : usr=0.70%, sys=1.60%, ctx=560, majf=0, minf=1 00:09:14.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.067 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.067 00:09:14.067 Run status group 0 (all jobs): 00:09:14.067 READ: bw=180KiB/s (184kB/s), 180KiB/s-180KiB/s (184kB/s-184kB/s), io=180KiB (184kB), run=1001-1001msec 00:09:14.067 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:14.067 00:09:14.067 Disk stats (read/write): 00:09:14.067 nvme0n1: ios=50/512, merge=0/0, ticks=1524/291, in_queue=1815, util=98.80% 00:09:14.067 11:53:50 
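The fio summary above is self-consistent: the read side completed 45 IOs of 4096 bytes in 1001 ms, i.e. 45 x 4 KiB = 180 KiB in about one second, matching the reported 180KiB/s (184 kB/s in decimal units), and the write side is 512 x 4 KiB = 2048 KiB over the same second, matching 2046KiB/s (2095 kB/s). A quick shell cross-check of the read figure:

  echo $(( 45 * 4096 ))             # 184320 bytes read in total
  echo '45 * 4096 / 1.001' | bc -l  # ~184135 B/s -> fio's "(184kB/s)"
  echo $(( 45 * 4096 / 1024 ))      # 180 KiB     -> fio's "180KiB/s"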
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.067 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.067 rmmod nvme_tcp 00:09:14.067 rmmod nvme_fabrics 00:09:14.067 rmmod nvme_keyring 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 828531 ']' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 828531 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 828531 ']' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 828531 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 828531 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 828531' 00:09:14.327 killing process with pid 828531 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 828531 00:09:14.327 11:53:50 
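For reference, test case1 above boils down to a handful of JSON-RPC calls against the running target. Condensed from the rpc_cmd lines in the trace (rpc_cmd is a thin wrapper over scripts/rpc.py), the second nvmf_subsystem_add_ns is the one that returns the -32602 "Invalid parameters" error shown earlier, because Malloc0 is already claimed exclusive_write by cnode1:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 claims Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected failure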
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 828531 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.327 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.871 00:09:16.871 real 0m18.058s 00:09:16.871 user 0m46.187s 00:09:16.871 sys 0m6.583s 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.871 ************************************ 00:09:16.871 END TEST nvmf_nmic 00:09:16.871 ************************************ 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.871 11:53:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.871 ************************************ 00:09:16.871 START TEST nvmf_fio_target 00:09:16.871 ************************************ 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:16.871 * Looking for test storage... 
00:09:16.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:16.871 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.872 --rc genhtml_branch_coverage=1 00:09:16.872 --rc genhtml_function_coverage=1 00:09:16.872 --rc genhtml_legend=1 00:09:16.872 --rc geninfo_all_blocks=1 00:09:16.872 --rc geninfo_unexecuted_blocks=1 00:09:16.872 00:09:16.872 ' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.872 --rc genhtml_branch_coverage=1 00:09:16.872 --rc genhtml_function_coverage=1 00:09:16.872 --rc genhtml_legend=1 00:09:16.872 --rc geninfo_all_blocks=1 00:09:16.872 --rc geninfo_unexecuted_blocks=1 00:09:16.872 00:09:16.872 ' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.872 --rc genhtml_branch_coverage=1 00:09:16.872 --rc genhtml_function_coverage=1 00:09:16.872 --rc genhtml_legend=1 00:09:16.872 --rc geninfo_all_blocks=1 00:09:16.872 --rc geninfo_unexecuted_blocks=1 00:09:16.872 00:09:16.872 ' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.872 --rc genhtml_branch_coverage=1 00:09:16.872 --rc genhtml_function_coverage=1 00:09:16.872 --rc genhtml_legend=1 00:09:16.872 --rc geninfo_all_blocks=1 00:09:16.872 --rc geninfo_unexecuted_blocks=1 00:09:16.872 00:09:16.872 ' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.872 11:53:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.872 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:16.873 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:16.873 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.873 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.011 11:54:00 
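The odd-looking eval '_remove_spdk_ns 15> /dev/null' above is the harness's per-command trace muting: autotest routes bash xtrace output through file descriptor 15 via BASH_XTRACEFD, so redirecting fd 15 to /dev/null for a single command drops just that command's trace without toggling set -x globally. A self-contained sketch of the same trick (the function name is illustrative):

  #!/usr/bin/env bash
  exec 15>&2                 # fd 15 mirrors stderr
  export BASH_XTRACEFD=15    # bash now writes xtrace lines to fd 15
  set -x
  noisy() { for i in 1 2 3; do :; done; }
  noisy                      # body is traced to stderr as usual
  noisy 15> /dev/null        # trace silently dropped for this call only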
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:25.011 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:25.011 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:25.011 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.012 11:54:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:25.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:25.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.012 11:54:00 
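Device discovery above is pure sysfs walking: gather_supported_nvmf_pci_devs matches vendor:device pairs (0x8086:0x159b for the two E810 ports found at 0000:4b:00.0 and 0000:4b:00.1), then resolves each PCI function to its kernel net interface through the "/sys/bus/pci/devices/$pci/net/"* glob shown in common.sh@409. A standalone sketch of the same resolution, using only sysfs (the IDs are the ones from this run):

  #!/usr/bin/env bash
  # Map Intel E810 (vendor 0x8086, device 0x159b) PCI functions to netdevs.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done

Run on this host it would print the same two lines the harness logs: cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1.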
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:09:25.012 00:09:25.012 --- 10.0.0.2 ping statistics --- 00:09:25.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.012 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:25.012 00:09:25.012 --- 10.0.0.1 ping statistics --- 00:09:25.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.012 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=834476 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 834476 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 834476 ']' 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.012 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.012 [2024-10-21 11:54:00.854803] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
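Because both ports sit on one physical NIC, nvmf_tcp_init splits target from initiator with a network namespace so the TCP traffic genuinely crosses the wire: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, port 4420 is opened in iptables, and one ping in each direction proves the path before any NVMe traffic flows. Condensed from the commands above:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

The target binary is then launched inside that namespace (NVMF_APP is prefixed with ip netns exec cvl_0_0_ns_spdk, which yields the nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation above), while rpc.py keeps talking to it over the UNIX socket /var/tmp/spdk.sock, which namespaces do not affect.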
00:09:25.012 [2024-10-21 11:54:00.854877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.012 [2024-10-21 11:54:00.945498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.012 [2024-10-21 11:54:00.999642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.012 [2024-10-21 11:54:00.999689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.012 [2024-10-21 11:54:00.999698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.012 [2024-10-21 11:54:00.999705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.012 [2024-10-21 11:54:00.999711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.012 [2024-10-21 11:54:01.001716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.012 [2024-10-21 11:54:01.001873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.012 [2024-10-21 11:54:01.002032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.012 [2024-10-21 11:54:01.002032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.274 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.535 [2024-10-21 11:54:01.878752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.535 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.796 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:25.796 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.796 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:25.796 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.057 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:26.057 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.317 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:26.317 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:26.578 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.839 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:26.839 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.839 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:26.839 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.099 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:27.099 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:27.360 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.619 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.619 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.619 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.619 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.879 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.139 [2024-10-21 11:54:04.492239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.139 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:28.139 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:28.399 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.315 11:54:06 
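At this point fio.sh has provisioned the whole target over rpc.py: a TCP transport with an 8192-byte IO unit, seven 64 MiB / 512 B malloc bdevs, a RAID0 over Malloc2 and Malloc3, a concat array over Malloc4..Malloc6, and subsystem nqn.2016-06.io.spdk:cnode1 exposing Malloc0, Malloc1, raid0 and concat0 on 10.0.0.2:4420, to which the initiator then connects. The same sequence as a condensed sketch (rpc shortened to the in-tree script path; the steps are regrouped slightly, with the listener added last):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

With four namespaces behind one controller, the host sees /dev/nvme0n1 through /dev/nvme0n4, which is exactly what waitforserial counts next by grepping lsblk for the SPDKISFASTANDAWESOME serial.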
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:30.315 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.315 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.315 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:30.315 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:30.315 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:32.250 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.250 [global] 00:09:32.250 thread=1 00:09:32.250 invalidate=1 00:09:32.250 rw=write 00:09:32.250 time_based=1 00:09:32.250 runtime=1 00:09:32.250 ioengine=libaio 00:09:32.250 direct=1 00:09:32.250 bs=4096 00:09:32.250 iodepth=1 00:09:32.250 norandommap=0 00:09:32.250 numjobs=1 00:09:32.250 00:09:32.250 verify_dump=1 00:09:32.250 verify_backlog=512 00:09:32.250 verify_state_save=0 00:09:32.250 do_verify=1 00:09:32.250 verify=crc32c-intel 00:09:32.250 [job0] 00:09:32.250 filename=/dev/nvme0n1 00:09:32.250 [job1] 00:09:32.250 filename=/dev/nvme0n2 00:09:32.250 [job2] 00:09:32.250 filename=/dev/nvme0n3 00:09:32.250 [job3] 00:09:32.250 filename=/dev/nvme0n4 00:09:32.250 Could not set queue depth (nvme0n1) 00:09:32.250 Could not set queue depth (nvme0n2) 00:09:32.250 Could not set queue depth (nvme0n3) 00:09:32.250 Could not set queue depth (nvme0n4) 00:09:32.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.514 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.514 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.514 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.514 fio-3.35 00:09:32.514 Starting 4 threads 00:09:33.923 00:09:33.923 job0: (groupid=0, jobs=1): err= 0: pid=836416: Mon Oct 21 11:54:10 2024 00:09:33.923 read: IOPS=17, BW=71.7KiB/s (73.4kB/s)(72.0KiB/1004msec) 00:09:33.923 slat (nsec): min=26376, max=28570, avg=26936.00, stdev=548.45 00:09:33.923 clat (usec): min=923, max=42248, avg=36953.71, stdev=13075.93 00:09:33.923 lat (usec): min=952, max=42274, avg=36980.65, stdev=13075.45 00:09:33.923 clat percentiles (usec): 00:09:33.923 | 1.00th=[ 922], 5.00th=[ 922], 10.00th=[ 1139], 
20.00th=[41157], 00:09:33.923 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:33.923 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:33.923 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:33.924 | 99.99th=[42206] 00:09:33.924 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:33.924 slat (nsec): min=9551, max=70106, avg=32481.98, stdev=7810.91 00:09:33.924 clat (usec): min=252, max=1454, avg=621.28, stdev=146.88 00:09:33.924 lat (usec): min=263, max=1488, avg=653.76, stdev=148.42 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 281], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 494], 00:09:33.924 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 652], 00:09:33.924 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:09:33.924 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1450], 99.95th=[ 1450], 00:09:33.924 | 99.99th=[ 1450] 00:09:33.924 bw ( KiB/s): min= 4096, max= 4096, per=40.24%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.924 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.924 lat (usec) : 500=20.00%, 750=59.06%, 1000=17.36% 00:09:33.924 lat (msec) : 2=0.57%, 50=3.02% 00:09:33.924 cpu : usr=1.50%, sys=1.79%, ctx=531, majf=0, minf=1 00:09:33.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.924 job1: (groupid=0, jobs=1): err= 0: pid=836433: Mon Oct 21 11:54:10 2024 00:09:33.924 read: IOPS=176, BW=708KiB/s (725kB/s)(712KiB/1006msec) 00:09:33.924 slat (nsec): min=26482, max=63322, avg=27966.10, stdev=4915.47 00:09:33.924 clat (usec): min=582, max=41838, avg=3762.45, stdev=10113.70 00:09:33.924 lat (usec): min=609, max=41865, avg=3790.41, stdev=10113.48 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 660], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 971], 00:09:33.924 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1106], 00:09:33.924 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1254], 95.00th=[41157], 00:09:33.924 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:33.924 | 99.99th=[41681] 00:09:33.924 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:33.924 slat (nsec): min=9322, max=54311, avg=30816.73, stdev=9614.73 00:09:33.924 clat (usec): min=158, max=951, avg=606.19, stdev=129.75 00:09:33.924 lat (usec): min=167, max=985, avg=637.00, stdev=133.58 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 251], 5.00th=[ 383], 10.00th=[ 449], 20.00th=[ 498], 00:09:33.924 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:09:33.924 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 799], 00:09:33.924 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955], 00:09:33.924 | 99.99th=[ 955] 00:09:33.924 bw ( KiB/s): min= 4096, max= 4096, per=40.24%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.924 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.924 lat (usec) : 250=0.72%, 500=14.64%, 750=48.55%, 1000=17.68% 00:09:33.924 lat (msec) : 2=16.67%, 50=1.74% 00:09:33.924 cpu : usr=1.00%, sys=3.08%, ctx=690, majf=0, minf=1 00:09:33.924 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 issued rwts: total=178,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.924 job2: (groupid=0, jobs=1): err= 0: pid=836452: Mon Oct 21 11:54:10 2024 00:09:33.924 read: IOPS=531, BW=2126KiB/s (2177kB/s)(2128KiB/1001msec) 00:09:33.924 slat (nsec): min=7844, max=67075, avg=28983.35, stdev=8488.35 00:09:33.924 clat (usec): min=450, max=1028, avg=752.15, stdev=80.37 00:09:33.924 lat (usec): min=482, max=1055, avg=781.13, stdev=81.35 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 537], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 693], 00:09:33.924 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 766], 00:09:33.924 | 70.00th=[ 783], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 881], 00:09:33.924 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1029], 99.95th=[ 1029], 00:09:33.924 | 99.99th=[ 1029] 00:09:33.924 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:33.924 slat (nsec): min=8747, max=73273, avg=32451.16, stdev=10124.73 00:09:33.924 clat (usec): min=134, max=4335, avg=527.39, stdev=228.49 00:09:33.924 lat (usec): min=173, max=4374, avg=559.84, stdev=229.78 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 245], 5.00th=[ 310], 10.00th=[ 351], 20.00th=[ 404], 00:09:33.924 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[ 545], 00:09:33.924 | 70.00th=[ 586], 80.00th=[ 644], 90.00th=[ 717], 95.00th=[ 766], 00:09:33.924 | 99.00th=[ 938], 99.50th=[ 1074], 99.90th=[ 3818], 99.95th=[ 4359], 00:09:33.924 | 99.99th=[ 4359] 00:09:33.924 bw ( KiB/s): min= 4096, max= 4096, per=40.24%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.924 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.924 lat (usec) : 250=0.84%, 500=32.20%, 750=43.12%, 1000=23.01% 00:09:33.924 lat (msec) : 2=0.64%, 4=0.13%, 10=0.06% 00:09:33.924 cpu : usr=3.40%, sys=6.00%, ctx=1556, majf=0, minf=1 00:09:33.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 issued rwts: total=532,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.924 job3: (groupid=0, jobs=1): err= 0: pid=836459: Mon Oct 21 11:54:10 2024 00:09:33.924 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:09:33.924 slat (nsec): min=26213, max=27389, avg=26698.25, stdev=334.63 00:09:33.924 clat (usec): min=41031, max=42040, avg=41840.47, stdev=314.03 00:09:33.924 lat (usec): min=41057, max=42067, avg=41867.16, stdev=314.08 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:33.924 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:33.924 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:33.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:33.924 | 99.99th=[42206] 00:09:33.924 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:33.924 slat (nsec): min=10336, max=57375, avg=31376.94, stdev=9491.27 
00:09:33.924 clat (usec): min=214, max=936, avg=610.48, stdev=125.70 00:09:33.924 lat (usec): min=226, max=971, avg=641.86, stdev=128.74 00:09:33.924 clat percentiles (usec): 00:09:33.924 | 1.00th=[ 318], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 498], 00:09:33.924 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:09:33.924 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:09:33.924 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:09:33.924 | 99.99th=[ 938] 00:09:33.924 bw ( KiB/s): min= 4096, max= 4096, per=40.24%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.924 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.924 lat (usec) : 250=0.38%, 500=19.13%, 750=64.58%, 1000=12.88% 00:09:33.924 lat (msec) : 50=3.03% 00:09:33.924 cpu : usr=1.00%, sys=1.40%, ctx=531, majf=0, minf=1 00:09:33.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.924 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.924 00:09:33.924 Run status group 0 (all jobs): 00:09:33.924 READ: bw=2958KiB/s (3029kB/s), 63.9KiB/s-2126KiB/s (65.4kB/s-2177kB/s), io=2976KiB (3047kB), run=1001-1006msec 00:09:33.924 WRITE: bw=9.94MiB/s (10.4MB/s), 2036KiB/s-4092KiB/s (2085kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1006msec 00:09:33.924 00:09:33.924 Disk stats (read/write): 00:09:33.924 nvme0n1: ios=63/512, merge=0/0, ticks=532/240, in_queue=772, util=87.58% 00:09:33.925 nvme0n2: ios=186/512, merge=0/0, ticks=701/229, in_queue=930, util=91.11% 00:09:33.925 nvme0n3: ios=512/774, merge=0/0, ticks=337/323, in_queue=660, util=88.36% 00:09:33.925 nvme0n4: ios=68/512, merge=0/0, ticks=986/300, in_queue=1286, util=96.68% 00:09:33.925 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:33.925 [global] 00:09:33.925 thread=1 00:09:33.925 invalidate=1 00:09:33.925 rw=randwrite 00:09:33.925 time_based=1 00:09:33.925 runtime=1 00:09:33.925 ioengine=libaio 00:09:33.925 direct=1 00:09:33.925 bs=4096 00:09:33.925 iodepth=1 00:09:33.925 norandommap=0 00:09:33.925 numjobs=1 00:09:33.925 00:09:33.925 verify_dump=1 00:09:33.925 verify_backlog=512 00:09:33.925 verify_state_save=0 00:09:33.925 do_verify=1 00:09:33.925 verify=crc32c-intel 00:09:33.925 [job0] 00:09:33.925 filename=/dev/nvme0n1 00:09:33.925 [job1] 00:09:33.925 filename=/dev/nvme0n2 00:09:33.925 [job2] 00:09:33.925 filename=/dev/nvme0n3 00:09:33.925 [job3] 00:09:33.925 filename=/dev/nvme0n4 00:09:33.925 Could not set queue depth (nvme0n1) 00:09:33.925 Could not set queue depth (nvme0n2) 00:09:33.925 Could not set queue depth (nvme0n3) 00:09:33.925 Could not set queue depth (nvme0n4) 00:09:34.188 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.188 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.188 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.188 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.188 fio-3.35 
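Each fio-wrapper invocation simply materializes its flags (-i 4096 -d 1 -t randwrite -r 1 -v here) into the [global] section dumped above, plus one [jobN] stanza per namespace. Reassembled as the equivalent job file, a sketch (the file path and the single job stanza are illustrative; the option lines are verbatim from the dump):

  cat > nvmf.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nvmf.fio

The "Could not set queue depth" warnings appear to be fio failing to adjust the block-layer queue depth on these fabric devices; they are cosmetic, and the jobs run regardless, as the results above and below show.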
00:09:34.188 Starting 4 threads 00:09:35.592 00:09:35.592 job0: (groupid=0, jobs=1): err= 0: pid=837201: Mon Oct 21 11:54:11 2024 00:09:35.592 read: IOPS=335, BW=1343KiB/s (1375kB/s)(1344KiB/1001msec) 00:09:35.592 slat (nsec): min=24511, max=43947, avg=25333.03, stdev=1574.77 00:09:35.592 clat (usec): min=679, max=41961, avg=1944.57, stdev=6198.26 00:09:35.592 lat (usec): min=704, max=41986, avg=1969.91, stdev=6198.25 00:09:35.592 clat percentiles (usec): 00:09:35.592 | 1.00th=[ 742], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 922], 00:09:35.592 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:09:35.592 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:35.593 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.593 | 99.99th=[42206] 00:09:35.593 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:35.593 slat (usec): min=9, max=106, avg=29.97, stdev= 8.62 00:09:35.593 clat (usec): min=181, max=1032, avg=617.93, stdev=136.08 00:09:35.593 lat (usec): min=213, max=1064, avg=647.89, stdev=138.14 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 306], 5.00th=[ 392], 10.00th=[ 449], 20.00th=[ 506], 00:09:35.593 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 652], 00:09:35.593 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 832], 00:09:35.593 | 99.00th=[ 947], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1037], 00:09:35.593 | 99.99th=[ 1037] 00:09:35.593 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.593 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.593 lat (usec) : 250=0.12%, 500=10.85%, 750=39.27%, 1000=32.43% 00:09:35.593 lat (msec) : 2=16.39%, 50=0.94% 00:09:35.593 cpu : usr=1.10%, sys=2.70%, ctx=849, majf=0, minf=1 00:09:35.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 issued rwts: total=336,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.593 job1: (groupid=0, jobs=1): err= 0: pid=837220: Mon Oct 21 11:54:11 2024 00:09:35.593 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:09:35.593 slat (nsec): min=25911, max=31088, avg=26674.28, stdev=1135.39 00:09:35.593 clat (usec): min=1054, max=42017, avg=39599.67, stdev=9622.63 00:09:35.593 lat (usec): min=1085, max=42043, avg=39626.34, stdev=9621.52 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41681], 00:09:35.593 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:35.593 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:35.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.593 | 99.99th=[42206] 00:09:35.593 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:35.593 slat (nsec): min=2998, max=82706, avg=25579.63, stdev=12093.11 00:09:35.593 clat (usec): min=182, max=1542, avg=596.88, stdev=141.16 00:09:35.593 lat (usec): min=185, max=1550, avg=622.46, stdev=144.47 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 265], 5.00th=[ 371], 10.00th=[ 424], 20.00th=[ 486], 00:09:35.593 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:09:35.593 | 70.00th=[ 
668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 807], 00:09:35.593 | 99.00th=[ 889], 99.50th=[ 1254], 99.90th=[ 1549], 99.95th=[ 1549], 00:09:35.593 | 99.99th=[ 1549] 00:09:35.593 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.593 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.593 lat (usec) : 250=0.94%, 500=20.57%, 750=66.42%, 1000=8.11% 00:09:35.593 lat (msec) : 2=0.75%, 50=3.21% 00:09:35.593 cpu : usr=0.58%, sys=1.35%, ctx=534, majf=0, minf=1 00:09:35.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.593 job2: (groupid=0, jobs=1): err= 0: pid=837246: Mon Oct 21 11:54:11 2024 00:09:35.593 read: IOPS=333, BW=1333KiB/s (1365kB/s)(1356KiB/1017msec) 00:09:35.593 slat (nsec): min=27625, max=62697, avg=28670.87, stdev=3230.13 00:09:35.593 clat (usec): min=686, max=41377, avg=1998.38, stdev=6079.53 00:09:35.593 lat (usec): min=714, max=41406, avg=2027.05, stdev=6079.50 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1004], 00:09:35.593 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1090], 00:09:35.593 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1188], 00:09:35.593 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:35.593 | 99.99th=[41157] 00:09:35.593 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:35.593 slat (nsec): min=9413, max=70205, avg=33168.88, stdev=8825.11 00:09:35.593 clat (usec): min=216, max=3851, avg=591.82, stdev=246.26 00:09:35.593 lat (usec): min=225, max=3886, avg=624.99, stdev=248.03 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 273], 5.00th=[ 322], 10.00th=[ 383], 20.00th=[ 441], 00:09:35.593 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 619], 00:09:35.593 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 840], 00:09:35.593 | 99.00th=[ 988], 99.50th=[ 1418], 99.90th=[ 3851], 99.95th=[ 3851], 00:09:35.593 | 99.99th=[ 3851] 00:09:35.593 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.593 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.593 lat (usec) : 250=0.24%, 500=17.98%, 750=33.96%, 1000=15.39% 00:09:35.593 lat (msec) : 2=31.26%, 4=0.24%, 50=0.94% 00:09:35.593 cpu : usr=1.57%, sys=3.64%, ctx=853, majf=0, minf=1 00:09:35.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 issued rwts: total=339,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.593 job3: (groupid=0, jobs=1): err= 0: pid=837257: Mon Oct 21 11:54:11 2024 00:09:35.593 read: IOPS=304, BW=1217KiB/s (1246kB/s)(1244KiB/1022msec) 00:09:35.593 slat (nsec): min=7586, max=50701, avg=29448.70, stdev=8723.24 00:09:35.593 clat (usec): min=453, max=42012, avg=2311.73, stdev=7811.55 00:09:35.593 lat (usec): min=485, max=42039, avg=2341.18, stdev=7810.93 00:09:35.593 clat 
percentiles (usec): 00:09:35.593 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 676], 00:09:35.593 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 783], 00:09:35.593 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 988], 00:09:35.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.593 | 99.99th=[42206] 00:09:35.593 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:35.593 slat (nsec): min=9463, max=56902, avg=33961.32, stdev=9434.78 00:09:35.593 clat (usec): min=185, max=3402, avg=522.15, stdev=190.24 00:09:35.593 lat (usec): min=196, max=3436, avg=556.11, stdev=190.70 00:09:35.593 clat percentiles (usec): 00:09:35.593 | 1.00th=[ 239], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 383], 00:09:35.593 | 30.00th=[ 433], 40.00th=[ 474], 50.00th=[ 502], 60.00th=[ 545], 00:09:35.593 | 70.00th=[ 586], 80.00th=[ 644], 90.00th=[ 717], 95.00th=[ 775], 00:09:35.593 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 3392], 99.95th=[ 3392], 00:09:35.593 | 99.99th=[ 3392] 00:09:35.593 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.593 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.593 lat (usec) : 250=0.85%, 500=30.01%, 750=45.08%, 1000=22.36% 00:09:35.593 lat (msec) : 2=0.12%, 4=0.12%, 50=1.46% 00:09:35.593 cpu : usr=1.57%, sys=3.43%, ctx=826, majf=0, minf=1 00:09:35.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.593 issued rwts: total=311,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.593 00:09:35.593 Run status group 0 (all jobs): 00:09:35.593 READ: bw=3876KiB/s (3969kB/s), 69.5KiB/s-1343KiB/s (71.2kB/s-1375kB/s), io=4016KiB (4112kB), run=1001-1036msec 00:09:35.593 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2046KiB/s (2024kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1036msec 00:09:35.593 00:09:35.593 Disk stats (read/write): 00:09:35.593 nvme0n1: ios=346/512, merge=0/0, ticks=520/298, in_queue=818, util=87.58% 00:09:35.593 nvme0n2: ios=41/512, merge=0/0, ticks=1483/301, in_queue=1784, util=98.78% 00:09:35.593 nvme0n3: ios=267/512, merge=0/0, ticks=699/244, in_queue=943, util=96.31% 00:09:35.593 nvme0n4: ios=363/512, merge=0/0, ticks=1430/216, in_queue=1646, util=96.47% 00:09:35.593 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:35.594 [global] 00:09:35.594 thread=1 00:09:35.594 invalidate=1 00:09:35.594 rw=write 00:09:35.594 time_based=1 00:09:35.594 runtime=1 00:09:35.594 ioengine=libaio 00:09:35.594 direct=1 00:09:35.594 bs=4096 00:09:35.594 iodepth=128 00:09:35.594 norandommap=0 00:09:35.594 numjobs=1 00:09:35.594 00:09:35.594 verify_dump=1 00:09:35.594 verify_backlog=512 00:09:35.594 verify_state_save=0 00:09:35.594 do_verify=1 00:09:35.594 verify=crc32c-intel 00:09:35.594 [job0] 00:09:35.594 filename=/dev/nvme0n1 00:09:35.594 [job1] 00:09:35.594 filename=/dev/nvme0n2 00:09:35.594 [job2] 00:09:35.594 filename=/dev/nvme0n3 00:09:35.594 [job3] 00:09:35.594 filename=/dev/nvme0n4 00:09:35.594 Could not set queue depth (nvme0n1) 00:09:35.594 Could not set queue depth (nvme0n2) 00:09:35.594 Could not set queue depth (nvme0n3) 
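A small decoding aid for these tables: fio prints every bandwidth twice, in binary and decimal units, so job0's write leg above, 2046KiB/s (2095kB/s) from 2048KiB in 1001msec, is one measurement in two notations. The arithmetic can be checked with a one-liner (numbers lifted from that line; the awk is only a convenience):

  awk 'BEGIN { kib = 2048; ms = 1001
               printf "%.0f KiB/s (%.0f kB/s)\n", kib/(ms/1000), kib*1.024/(ms/1000) }'
  # -> 2046 KiB/s (2095 kB/s)

The same doubling applies to the io= and bw= figures in the run-status summary lines.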
00:09:35.594 Could not set queue depth (nvme0n4) 00:09:35.854 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.854 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.854 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.854 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.854 fio-3.35 00:09:35.854 Starting 4 threads 00:09:37.257 00:09:37.257 job0: (groupid=0, jobs=1): err= 0: pid=837817: Mon Oct 21 11:54:13 2024 00:09:37.257 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec) 00:09:37.257 slat (nsec): min=957, max=8696.8k, avg=61821.67, stdev=460994.23 00:09:37.257 clat (usec): min=3277, max=25451, avg=8137.75, stdev=2496.99 00:09:37.257 lat (usec): min=3286, max=25453, avg=8199.57, stdev=2526.81 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6587], 00:09:37.257 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7832], 00:09:37.257 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11207], 95.00th=[12780], 00:09:37.257 | 99.00th=[18482], 99.50th=[23462], 99.90th=[25297], 99.95th=[25297], 00:09:37.257 | 99.99th=[25560] 00:09:37.257 write: IOPS=8163, BW=31.9MiB/s (33.4MB/s)(32.1MiB/1006msec); 0 zone resets 00:09:37.257 slat (nsec): min=1630, max=9586.8k, avg=54367.23, stdev=401103.47 00:09:37.257 clat (usec): min=1273, max=26072, avg=7414.51, stdev=3570.26 00:09:37.257 lat (usec): min=1295, max=26080, avg=7468.88, stdev=3598.60 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 2769], 5.00th=[ 4146], 10.00th=[ 4424], 20.00th=[ 5276], 00:09:37.257 | 30.00th=[ 6063], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 7046], 00:09:37.257 | 70.00th=[ 7177], 80.00th=[ 8094], 90.00th=[10290], 95.00th=[15664], 00:09:37.257 | 99.00th=[23462], 99.50th=[24249], 99.90th=[25822], 99.95th=[26084], 00:09:37.257 | 99.99th=[26084] 00:09:37.257 bw ( KiB/s): min=29048, max=36488, per=32.68%, avg=32768.00, stdev=5260.87, samples=2 00:09:37.257 iops : min= 7262, max= 9122, avg=8192.00, stdev=1315.22, samples=2 00:09:37.257 lat (msec) : 2=0.05%, 4=2.18%, 10=83.43%, 20=12.81%, 50=1.54% 00:09:37.257 cpu : usr=6.97%, sys=8.16%, ctx=485, majf=0, minf=2 00:09:37.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:37.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.257 issued rwts: total=8192,8212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.257 job1: (groupid=0, jobs=1): err= 0: pid=837822: Mon Oct 21 11:54:13 2024 00:09:37.257 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:09:37.257 slat (nsec): min=956, max=11603k, avg=68067.87, stdev=491630.60 00:09:37.257 clat (usec): min=1630, max=26144, avg=8906.47, stdev=3214.04 00:09:37.257 lat (usec): min=1659, max=26176, avg=8974.53, stdev=3254.98 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 1876], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7177], 00:09:37.257 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:09:37.257 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[12649], 95.00th=[15533], 00:09:37.257 | 99.00th=[21365], 99.50th=[21627], 99.90th=[23462], 
99.95th=[23462], 00:09:37.257 | 99.99th=[26084] 00:09:37.257 write: IOPS=6306, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1007msec); 0 zone resets 00:09:37.257 slat (nsec): min=1607, max=8914.5k, avg=84362.20, stdev=504827.63 00:09:37.257 clat (usec): min=1236, max=84742, avg=11473.02, stdev=13322.68 00:09:37.257 lat (usec): min=1247, max=84751, avg=11557.38, stdev=13405.09 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 4178], 5.00th=[ 4752], 10.00th=[ 5800], 20.00th=[ 6718], 00:09:37.257 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:37.257 | 70.00th=[ 7898], 80.00th=[10159], 90.00th=[16909], 95.00th=[42730], 00:09:37.257 | 99.00th=[77071], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:09:37.257 | 99.99th=[84411] 00:09:37.257 bw ( KiB/s): min=20480, max=29312, per=24.83%, avg=24896.00, stdev=6245.17, samples=2 00:09:37.257 iops : min= 5120, max= 7328, avg=6224.00, stdev=1561.29, samples=2 00:09:37.257 lat (msec) : 2=0.82%, 4=0.61%, 10=75.80%, 20=18.00%, 50=2.68% 00:09:37.257 lat (msec) : 100=2.10% 00:09:37.257 cpu : usr=4.08%, sys=5.67%, ctx=761, majf=0, minf=1 00:09:37.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:37.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.257 issued rwts: total=6144,6351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.257 job2: (groupid=0, jobs=1): err= 0: pid=837847: Mon Oct 21 11:54:13 2024 00:09:37.257 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:09:37.257 slat (nsec): min=1049, max=33904k, avg=143795.25, stdev=1234573.32 00:09:37.257 clat (usec): min=3322, max=86603, avg=18071.19, stdev=15069.68 00:09:37.257 lat (usec): min=3364, max=86629, avg=18214.99, stdev=15204.40 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8848], 00:09:37.257 | 30.00th=[ 9503], 40.00th=[11600], 50.00th=[12387], 60.00th=[13304], 00:09:37.257 | 70.00th=[15533], 80.00th=[20841], 90.00th=[45351], 95.00th=[53216], 00:09:37.257 | 99.00th=[68682], 99.50th=[68682], 99.90th=[79168], 99.95th=[80217], 00:09:37.257 | 99.99th=[86508] 00:09:37.257 write: IOPS=3910, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1010msec); 0 zone resets 00:09:37.257 slat (nsec): min=1656, max=14948k, avg=106624.60, stdev=842619.29 00:09:37.257 clat (usec): min=1354, max=61990, avg=15612.03, stdev=10988.75 00:09:37.257 lat (usec): min=1364, max=62014, avg=15718.65, stdev=11070.43 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 3916], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 7111], 00:09:37.257 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[11600], 60.00th=[14222], 00:09:37.257 | 70.00th=[20055], 80.00th=[22676], 90.00th=[30802], 95.00th=[39060], 00:09:37.257 | 99.00th=[52691], 99.50th=[55837], 99.90th=[55837], 99.95th=[61604], 00:09:37.257 | 99.99th=[62129] 00:09:37.257 bw ( KiB/s): min= 8184, max=22400, per=15.25%, avg=15292.00, stdev=10052.23, samples=2 00:09:37.257 iops : min= 2046, max= 5600, avg=3823.00, stdev=2513.06, samples=2 00:09:37.257 lat (msec) : 2=0.12%, 4=0.64%, 10=39.02%, 20=33.73%, 50=22.46% 00:09:37.257 lat (msec) : 100=4.04% 00:09:37.257 cpu : usr=3.47%, sys=4.06%, ctx=253, majf=0, minf=2 00:09:37.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:37.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:09:37.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.257 issued rwts: total=3584,3950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.257 job3: (groupid=0, jobs=1): err= 0: pid=837855: Mon Oct 21 11:54:13 2024 00:09:37.257 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:09:37.257 slat (nsec): min=963, max=20262k, avg=75265.38, stdev=634411.67 00:09:37.257 clat (usec): min=3404, max=53260, avg=10485.07, stdev=4770.95 00:09:37.257 lat (usec): min=3422, max=53285, avg=10560.34, stdev=4817.46 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 3949], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7963], 00:09:37.257 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9896], 00:09:37.257 | 70.00th=[10552], 80.00th=[11469], 90.00th=[14091], 95.00th=[16188], 00:09:37.257 | 99.00th=[33162], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:37.257 | 99.99th=[53216] 00:09:37.257 write: IOPS=6751, BW=26.4MiB/s (27.7MB/s)(26.6MiB/1008msec); 0 zone resets 00:09:37.257 slat (nsec): min=1631, max=11176k, avg=63564.19, stdev=535532.92 00:09:37.257 clat (usec): min=1076, max=21825, avg=8545.00, stdev=3178.70 00:09:37.257 lat (usec): min=1328, max=28455, avg=8608.56, stdev=3206.69 00:09:37.257 clat percentiles (usec): 00:09:37.257 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 6456], 00:09:37.257 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225], 00:09:37.257 | 70.00th=[ 8717], 80.00th=[10421], 90.00th=[12780], 95.00th=[14615], 00:09:37.257 | 99.00th=[20579], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:09:37.257 | 99.99th=[21890] 00:09:37.257 bw ( KiB/s): min=24760, max=28672, per=26.64%, avg=26716.00, stdev=2766.20, samples=2 00:09:37.257 iops : min= 6190, max= 7168, avg=6679.00, stdev=691.55, samples=2 00:09:37.257 lat (msec) : 2=0.31%, 4=1.09%, 10=69.60%, 20=26.25%, 50=2.74% 00:09:37.257 lat (msec) : 100=0.01% 00:09:37.257 cpu : usr=5.36%, sys=7.85%, ctx=268, majf=0, minf=1 00:09:37.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:37.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.257 issued rwts: total=6656,6806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.257 00:09:37.257 Run status group 0 (all jobs): 00:09:37.257 READ: bw=95.0MiB/s (99.7MB/s), 13.9MiB/s-31.8MiB/s (14.5MB/s-33.4MB/s), io=96.0MiB (101MB), run=1006-1010msec 00:09:37.257 WRITE: bw=97.9MiB/s (103MB/s), 15.3MiB/s-31.9MiB/s (16.0MB/s-33.4MB/s), io=98.9MiB (104MB), run=1006-1010msec 00:09:37.257 00:09:37.257 Disk stats (read/write): 00:09:37.257 nvme0n1: ios=6699/6679, merge=0/0, ticks=51992/48684, in_queue=100676, util=95.99% 00:09:37.257 nvme0n2: ios=4971/5120, merge=0/0, ticks=31209/42436, in_queue=73645, util=96.73% 00:09:37.257 nvme0n3: ios=3130/3441, merge=0/0, ticks=35276/34475, in_queue=69751, util=96.09% 00:09:37.257 nvme0n4: ios=5613/5632, merge=0/0, ticks=52497/42898, in_queue=95395, util=92.73% 00:09:37.257 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:37.257 [global] 00:09:37.257 thread=1 00:09:37.257 invalidate=1 00:09:37.257 rw=randwrite 00:09:37.257 
time_based=1 00:09:37.257 runtime=1 00:09:37.257 ioengine=libaio 00:09:37.257 direct=1 00:09:37.257 bs=4096 00:09:37.257 iodepth=128 00:09:37.257 norandommap=0 00:09:37.257 numjobs=1 00:09:37.257 00:09:37.257 verify_dump=1 00:09:37.257 verify_backlog=512 00:09:37.257 verify_state_save=0 00:09:37.257 do_verify=1 00:09:37.257 verify=crc32c-intel 00:09:37.257 [job0] 00:09:37.257 filename=/dev/nvme0n1 00:09:37.257 [job1] 00:09:37.257 filename=/dev/nvme0n2 00:09:37.257 [job2] 00:09:37.257 filename=/dev/nvme0n3 00:09:37.257 [job3] 00:09:37.257 filename=/dev/nvme0n4 00:09:37.257 Could not set queue depth (nvme0n1) 00:09:37.257 Could not set queue depth (nvme0n2) 00:09:37.257 Could not set queue depth (nvme0n3) 00:09:37.257 Could not set queue depth (nvme0n4) 00:09:37.522 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.522 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.522 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.522 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.522 fio-3.35 00:09:37.522 Starting 4 threads 00:09:38.941 00:09:38.941 job0: (groupid=0, jobs=1): err= 0: pid=838300: Mon Oct 21 11:54:15 2024 00:09:38.941 read: IOPS=6108, BW=23.9MiB/s (25.0MB/s)(24.1MiB/1008msec) 00:09:38.941 slat (nsec): min=973, max=15752k, avg=79975.31, stdev=612216.83 00:09:38.941 clat (usec): min=3439, max=34292, avg=9886.03, stdev=5389.50 00:09:38.941 lat (usec): min=3445, max=34295, avg=9966.00, stdev=5437.94 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 4752], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6259], 00:09:38.941 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 8225], 60.00th=[ 9241], 00:09:38.941 | 70.00th=[10159], 80.00th=[11469], 90.00th=[17695], 95.00th=[21627], 00:09:38.941 | 99.00th=[29492], 99.50th=[31589], 99.90th=[33817], 99.95th=[34341], 00:09:38.941 | 99.99th=[34341] 00:09:38.941 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:09:38.941 slat (nsec): min=1585, max=10553k, avg=70973.96, stdev=418891.95 00:09:38.941 clat (usec): min=1205, max=34288, avg=10065.53, stdev=5526.01 00:09:38.941 lat (usec): min=1216, max=34290, avg=10136.51, stdev=5562.18 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 3130], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 5342], 00:09:38.941 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 8455], 60.00th=[10159], 00:09:38.941 | 70.00th=[13173], 80.00th=[14877], 90.00th=[18482], 95.00th=[20579], 00:09:38.941 | 99.00th=[24249], 99.50th=[25560], 99.90th=[31065], 99.95th=[32113], 00:09:38.941 | 99.99th=[34341] 00:09:38.941 bw ( KiB/s): min=23984, max=28344, per=28.24%, avg=26164.00, stdev=3082.99, samples=2 00:09:38.941 iops : min= 5996, max= 7086, avg=6541.00, stdev=770.75, samples=2 00:09:38.941 lat (msec) : 2=0.02%, 4=4.86%, 10=58.54%, 20=29.35%, 50=7.23% 00:09:38.941 cpu : usr=4.57%, sys=7.45%, ctx=481, majf=0, minf=2 00:09:38.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:38.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.941 issued rwts: total=6157,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.941 
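For reference, the pass above is plain fio driven by the ini job file echoed just before "Starting 4 threads". A minimal standalone equivalent — a sketch that simply re-creates that job file by hand, assuming the same /dev/nvme0n1..n4 namespaces are already connected; the fio-wrapper script also does nvmf-specific setup not shown here — would be:

    # Sketch only: rebuild the job file printed above and run it directly.
    cat > /tmp/nvmf-randwrite.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf-randwrite.fio

The wrapper flags map directly onto the ini options shown: -i 4096 to bs=4096, -d 128 to iodepth=128, -r 1 to runtime=1, and -v to the crc32c-intel verify settings.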
job1: (groupid=0, jobs=1): err= 0: pid=838318: Mon Oct 21 11:54:15 2024 00:09:38.941 read: IOPS=4610, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1008msec) 00:09:38.941 slat (nsec): min=898, max=18152k, avg=84816.54, stdev=671888.68 00:09:38.941 clat (usec): min=2796, max=44839, avg=11102.13, stdev=7454.05 00:09:38.941 lat (usec): min=2804, max=47638, avg=11186.95, stdev=7525.32 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 5407], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7177], 00:09:38.941 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:09:38.941 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[26608], 95.00th=[29230], 00:09:38.941 | 99.00th=[38536], 99.50th=[40633], 99.90th=[42206], 99.95th=[43779], 00:09:38.941 | 99.99th=[44827] 00:09:38.941 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:09:38.941 slat (nsec): min=1517, max=11970k, avg=111380.78, stdev=595963.89 00:09:38.941 clat (usec): min=683, max=54109, avg=14847.00, stdev=13701.73 00:09:38.941 lat (usec): min=718, max=54117, avg=14958.38, stdev=13798.28 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 1467], 5.00th=[ 3720], 10.00th=[ 4883], 20.00th=[ 6456], 00:09:38.941 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[11076], 00:09:38.941 | 70.00th=[14615], 80.00th=[21890], 90.00th=[42206], 95.00th=[49021], 00:09:38.941 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:09:38.941 | 99.99th=[54264] 00:09:38.941 bw ( KiB/s): min=15336, max=24912, per=21.72%, avg=20124.00, stdev=6771.25, samples=2 00:09:38.941 iops : min= 3834, max= 6228, avg=5031.00, stdev=1692.81, samples=2 00:09:38.941 lat (usec) : 750=0.03%, 1000=0.06% 00:09:38.941 lat (msec) : 2=0.77%, 4=2.77%, 10=61.93%, 20=16.41%, 50=15.94% 00:09:38.941 lat (msec) : 100=2.08% 00:09:38.941 cpu : usr=3.77%, sys=5.06%, ctx=520, majf=0, minf=1 00:09:38.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:38.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.941 issued rwts: total=4647,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.941 job2: (groupid=0, jobs=1): err= 0: pid=838338: Mon Oct 21 11:54:15 2024 00:09:38.941 read: IOPS=6227, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1007msec) 00:09:38.941 slat (nsec): min=943, max=13066k, avg=78172.73, stdev=585967.06 00:09:38.941 clat (usec): min=3419, max=26128, avg=9829.08, stdev=2923.39 00:09:38.941 lat (usec): min=3963, max=26153, avg=9907.25, stdev=2970.28 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 7570], 00:09:38.941 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:38.941 | 70.00th=[10683], 80.00th=[11994], 90.00th=[13829], 95.00th=[15270], 00:09:38.941 | 99.00th=[20055], 99.50th=[21103], 99.90th=[21890], 99.95th=[21890], 00:09:38.941 | 99.99th=[26084] 00:09:38.941 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:09:38.941 slat (nsec): min=1540, max=16215k, avg=71496.11, stdev=496611.71 00:09:38.941 clat (usec): min=699, max=28411, avg=9924.83, stdev=4632.01 00:09:38.941 lat (usec): min=711, max=28428, avg=9996.32, stdev=4664.80 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 2606], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5604], 00:09:38.941 | 30.00th=[ 6849], 40.00th=[ 7570], 
50.00th=[ 8029], 60.00th=[10552], 00:09:38.941 | 70.00th=[13698], 80.00th=[14877], 90.00th=[15533], 95.00th=[17433], 00:09:38.941 | 99.00th=[20579], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:09:38.941 | 99.99th=[28443] 00:09:38.941 bw ( KiB/s): min=25136, max=28104, per=28.73%, avg=26620.00, stdev=2098.69, samples=2 00:09:38.941 iops : min= 6284, max= 7026, avg=6655.00, stdev=524.67, samples=2 00:09:38.941 lat (usec) : 750=0.01%, 1000=0.02% 00:09:38.941 lat (msec) : 2=0.29%, 4=2.08%, 10=57.81%, 20=38.62%, 50=1.18% 00:09:38.941 cpu : usr=2.98%, sys=5.17%, ctx=531, majf=0, minf=1 00:09:38.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:38.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.941 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.941 job3: (groupid=0, jobs=1): err= 0: pid=838345: Mon Oct 21 11:54:15 2024 00:09:38.941 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:09:38.941 slat (nsec): min=981, max=17744k, avg=97976.28, stdev=670563.10 00:09:38.941 clat (usec): min=6535, max=56824, avg=13427.56, stdev=8763.40 00:09:38.941 lat (usec): min=6542, max=60598, avg=13525.53, stdev=8829.61 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 7308], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9241], 00:09:38.941 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[11207], 00:09:38.941 | 70.00th=[11994], 80.00th=[12780], 90.00th=[22676], 95.00th=[39060], 00:09:38.941 | 99.00th=[46924], 99.50th=[49546], 99.90th=[50594], 99.95th=[54264], 00:09:38.941 | 99.99th=[56886] 00:09:38.941 write: IOPS=4888, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1006msec); 0 zone resets 00:09:38.941 slat (nsec): min=1610, max=12336k, avg=106661.02, stdev=666585.56 00:09:38.941 clat (usec): min=1281, max=56569, avg=13337.49, stdev=8857.42 00:09:38.941 lat (usec): min=1292, max=56597, avg=13444.15, stdev=8938.48 00:09:38.941 clat percentiles (usec): 00:09:38.941 | 1.00th=[ 6390], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8356], 00:09:38.941 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:09:38.941 | 70.00th=[10945], 80.00th=[16450], 90.00th=[24249], 95.00th=[33162], 00:09:38.941 | 99.00th=[51119], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:09:38.941 | 99.99th=[56361] 00:09:38.941 bw ( KiB/s): min=17848, max=20480, per=20.68%, avg=19164.00, stdev=1861.11, samples=2 00:09:38.941 iops : min= 4462, max= 5120, avg=4791.00, stdev=465.28, samples=2 00:09:38.941 lat (msec) : 2=0.08%, 10=53.61%, 20=30.40%, 50=14.98%, 100=0.92% 00:09:38.941 cpu : usr=2.68%, sys=4.27%, ctx=520, majf=0, minf=1 00:09:38.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:38.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.941 issued rwts: total=4608,4918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.941 00:09:38.941 Run status group 0 (all jobs): 00:09:38.941 READ: bw=84.0MiB/s (88.1MB/s), 17.9MiB/s-24.3MiB/s (18.8MB/s-25.5MB/s), io=84.7MiB (88.8MB), run=1006-1008msec 00:09:38.941 WRITE: bw=90.5MiB/s (94.9MB/s), 19.1MiB/s-25.8MiB/s (20.0MB/s-27.1MB/s), io=91.2MiB (95.6MB), run=1006-1008msec 00:09:38.941 
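One way to sanity-check these per-job numbers: the per= field in each job's bw line is that job's share of the group aggregate, so job0's write average of 26164KiB/s at per=28.24% implies an aggregate of roughly 26164 / 0.2824 ≈ 92650KiB/s ≈ 90.5MiB/s, which matches the WRITE line of the run status group that follows.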
00:09:38.941 Disk stats (read/write): 00:09:38.941 nvme0n1: ios=5036/5120, merge=0/0, ticks=50117/51678, in_queue=101795, util=87.68% 00:09:38.941 nvme0n2: ios=4134/4527, merge=0/0, ticks=29578/33985, in_queue=63563, util=88.07% 00:09:38.941 nvme0n3: ios=5169/5295, merge=0/0, ticks=49525/52833, in_queue=102358, util=96.31% 00:09:38.941 nvme0n4: ios=4151/4562, merge=0/0, ticks=16627/18098, in_queue=34725, util=100.00% 00:09:38.941 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:38.941 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=838516 00:09:38.941 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:38.941 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:38.941 [global] 00:09:38.941 thread=1 00:09:38.941 invalidate=1 00:09:38.941 rw=read 00:09:38.941 time_based=1 00:09:38.941 runtime=10 00:09:38.941 ioengine=libaio 00:09:38.941 direct=1 00:09:38.942 bs=4096 00:09:38.942 iodepth=1 00:09:38.942 norandommap=1 00:09:38.942 numjobs=1 00:09:38.942 00:09:38.942 [job0] 00:09:38.942 filename=/dev/nvme0n1 00:09:38.942 [job1] 00:09:38.942 filename=/dev/nvme0n2 00:09:38.942 [job2] 00:09:38.942 filename=/dev/nvme0n3 00:09:38.942 [job3] 00:09:38.942 filename=/dev/nvme0n4 00:09:38.942 Could not set queue depth (nvme0n1) 00:09:38.942 Could not set queue depth (nvme0n2) 00:09:38.942 Could not set queue depth (nvme0n3) 00:09:38.942 Could not set queue depth (nvme0n4) 00:09:39.202 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.202 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.202 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.202 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.202 fio-3.35 00:09:39.202 Starting 4 threads 00:09:41.751 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:41.751 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10432512, buflen=4096 00:09:41.751 fio: pid=838833, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.012 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:42.012 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12550144, buflen=4096 00:09:42.012 fio: pid=838826, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.012 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.012 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:42.276 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.277 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:42.277 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3342336, buflen=4096 00:09:42.277 fio: pid=838793, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.541 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=655360, buflen=4096 00:09:42.541 fio: pid=838807, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.541 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.541 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:42.541 00:09:42.541 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=838793: Mon Oct 21 11:54:18 2024 00:09:42.541 read: IOPS=272, BW=1089KiB/s (1115kB/s)(3264KiB/2998msec) 00:09:42.541 slat (usec): min=6, max=20590, avg=56.49, stdev=737.04 00:09:42.541 clat (usec): min=376, max=42098, avg=3584.15, stdev=10175.97 00:09:42.541 lat (usec): min=402, max=62034, avg=3640.67, stdev=10316.58 00:09:42.541 clat percentiles (usec): 00:09:42.541 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 783], 00:09:42.541 | 30.00th=[ 824], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 930], 00:09:42.541 | 70.00th=[ 955], 80.00th=[ 971], 90.00th=[ 1020], 95.00th=[41681], 00:09:42.541 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.541 | 99.99th=[42206] 00:09:42.541 bw ( KiB/s): min= 96, max= 3728, per=15.43%, avg=1288.00, stdev=1487.96, samples=5 00:09:42.541 iops : min= 24, max= 932, avg=322.00, stdev=371.99, samples=5 00:09:42.541 lat (usec) : 500=0.61%, 750=13.34%, 1000=72.83% 00:09:42.541 lat (msec) : 2=6.36%, 10=0.12%, 50=6.61% 00:09:42.541 cpu : usr=0.23%, sys=0.97%, ctx=820, majf=0, minf=2 00:09:42.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 issued rwts: total=817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.541 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=838807: Mon Oct 21 11:54:18 2024 00:09:42.541 read: IOPS=50, BW=203KiB/s (208kB/s)(640KiB/3157msec) 00:09:42.541 slat (usec): min=6, max=7706, avg=72.82, stdev=605.84 00:09:42.541 clat (usec): min=494, max=42575, avg=19513.14, stdev=20479.91 00:09:42.541 lat (usec): min=514, max=50011, avg=19586.26, stdev=20546.48 00:09:42.541 clat percentiles (usec): 00:09:42.541 | 1.00th=[ 506], 5.00th=[ 627], 10.00th=[ 701], 20.00th=[ 775], 00:09:42.541 | 30.00th=[ 832], 40.00th=[ 922], 50.00th=[ 971], 60.00th=[41157], 00:09:42.541 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.541 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:42.541 | 99.99th=[42730] 00:09:42.541 bw ( KiB/s): min= 89, max= 776, per=2.49%, avg=208.17, stdev=278.19, samples=6 00:09:42.541 iops : min= 22, max= 194, avg=52.00, stdev=69.57, samples=6 00:09:42.541 lat (usec) : 500=0.62%, 750=13.66%, 1000=38.51% 00:09:42.541 lat (msec) : 2=1.24%, 50=45.34% 
00:09:42.541 cpu : usr=0.00%, sys=0.19%, ctx=163, majf=0, minf=2 00:09:42.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 issued rwts: total=161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.541 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=838826: Mon Oct 21 11:54:18 2024 00:09:42.541 read: IOPS=1097, BW=4390KiB/s (4495kB/s)(12.0MiB/2792msec) 00:09:42.541 slat (usec): min=6, max=13045, avg=35.25, stdev=325.85 00:09:42.541 clat (usec): min=378, max=3874, avg=861.35, stdev=178.64 00:09:42.541 lat (usec): min=390, max=13996, avg=896.60, stdev=373.83 00:09:42.541 clat percentiles (usec): 00:09:42.541 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 742], 00:09:42.541 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 848], 60.00th=[ 914], 00:09:42.541 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1045], 00:09:42.541 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 3523], 99.95th=[ 3785], 00:09:42.541 | 99.99th=[ 3884] 00:09:42.541 bw ( KiB/s): min= 3984, max= 4928, per=54.11%, avg=4516.80, stdev=383.11, samples=5 00:09:42.541 iops : min= 996, max= 1232, avg=1129.20, stdev=95.78, samples=5 00:09:42.541 lat (usec) : 500=0.46%, 750=22.74%, 1000=61.14% 00:09:42.541 lat (msec) : 2=15.43%, 4=0.20% 00:09:42.541 cpu : usr=1.47%, sys=4.62%, ctx=3067, majf=0, minf=1 00:09:42.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.541 issued rwts: total=3065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.541 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=838833: Mon Oct 21 11:54:18 2024 00:09:42.541 read: IOPS=983, BW=3934KiB/s (4028kB/s)(9.95MiB/2590msec) 00:09:42.541 slat (nsec): min=5561, max=71082, avg=25455.66, stdev=3538.03 00:09:42.541 clat (usec): min=301, max=40886, avg=981.22, stdev=1114.02 00:09:42.541 lat (usec): min=309, max=40912, avg=1006.67, stdev=1114.12 00:09:42.541 clat percentiles (usec): 00:09:42.541 | 1.00th=[ 594], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 889], 00:09:42.541 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:09:42.541 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:42.541 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1254], 99.95th=[40109], 00:09:42.541 | 99.99th=[40633] 00:09:42.541 bw ( KiB/s): min= 3376, max= 4224, per=47.35%, avg=3952.00, stdev=334.52, samples=5 00:09:42.541 iops : min= 844, max= 1056, avg=988.00, stdev=83.63, samples=5 00:09:42.541 lat (usec) : 500=0.24%, 750=4.43%, 1000=62.68% 00:09:42.541 lat (msec) : 2=32.54%, 50=0.08% 00:09:42.541 cpu : usr=0.85%, sys=3.17%, ctx=2550, majf=0, minf=2 00:09:42.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.542 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:42.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.542 00:09:42.542 Run status group 0 (all jobs): 00:09:42.542 READ: bw=8346KiB/s (8546kB/s), 203KiB/s-4390KiB/s (208kB/s-4495kB/s), io=25.7MiB (27.0MB), run=2590-3157msec 00:09:42.542 00:09:42.542 Disk stats (read/write): 00:09:42.542 nvme0n1: ios=812/0, merge=0/0, ticks=2740/0, in_queue=2740, util=93.96% 00:09:42.542 nvme0n2: ios=158/0, merge=0/0, ticks=3041/0, in_queue=3041, util=95.45% 00:09:42.542 nvme0n3: ios=2907/0, merge=0/0, ticks=2309/0, in_queue=2309, util=96.03% 00:09:42.542 nvme0n4: ios=2547/0, merge=0/0, ticks=2489/0, in_queue=2489, util=96.42% 00:09:42.542 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.542 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:42.803 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.803 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:43.065 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.065 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:43.065 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.065 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 838516 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.327 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.589 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:43.589 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:43.589 11:54:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:43.589 nvmf hotplug test: fio failed as expected 00:09:43.589 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.589 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.589 rmmod nvme_tcp 00:09:43.589 rmmod nvme_fabrics 00:09:43.589 rmmod nvme_keyring 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 834476 ']' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 834476 ']' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834476' 00:09:43.851 killing process with pid 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 834476 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.851 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.859 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.859 00:09:45.859 real 0m29.410s 00:09:45.859 user 2m38.109s 00:09:45.859 sys 0m9.503s 00:09:45.859 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.859 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.859 ************************************ 00:09:45.859 END TEST nvmf_fio_target 00:09:45.859 ************************************ 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.120 ************************************ 00:09:46.120 START TEST nvmf_bdevio 00:09:46.120 ************************************ 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.120 * Looking for test storage... 
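A note on the outcome recorded above: "nvmf hotplug test: fio failed as expected" is the intended result. fio.sh starts 10-second read jobs against the four namespaces and then deletes the bdevs backing them via rpc.py, so every job is expected to die with err=95 (Operation not supported). Condensed to a sketch (rpc.py path shortened and the job-file name hypothetical; the real script's bookkeeping around fio_pid and fio_status is more involved):

    # Sketch of the hotplug sequence traced in target/fio.sh above.
    RPC=scripts/rpc.py
    fio ten-second-reads.fio &       # runtime=10 reads on /dev/nvme0n1..n4
    fio_pid=$!
    sleep 3                          # fio.sh@61: let the jobs get going
    $RPC bdev_raid_delete concat0    # each delete hot-removes one namespace
    $RPC bdev_raid_delete raid0
    $RPC bdev_malloc_delete Malloc0
    $RPC bdev_malloc_delete Malloc1
    # (the loop goes on to delete Malloc2..Malloc6 the same way)
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'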
00:09:46.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.120 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.382 --rc genhtml_branch_coverage=1 00:09:46.382 --rc genhtml_function_coverage=1 00:09:46.382 --rc genhtml_legend=1 00:09:46.382 --rc geninfo_all_blocks=1 00:09:46.382 --rc geninfo_unexecuted_blocks=1 00:09:46.382 00:09:46.382 ' 00:09:46.382 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.382 --rc genhtml_branch_coverage=1 00:09:46.382 --rc genhtml_function_coverage=1 00:09:46.382 --rc genhtml_legend=1 00:09:46.382 --rc geninfo_all_blocks=1 00:09:46.383 --rc geninfo_unexecuted_blocks=1 00:09:46.383 00:09:46.383 ' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.383 --rc genhtml_branch_coverage=1 00:09:46.383 --rc genhtml_function_coverage=1 00:09:46.383 --rc genhtml_legend=1 00:09:46.383 --rc geninfo_all_blocks=1 00:09:46.383 --rc geninfo_unexecuted_blocks=1 00:09:46.383 00:09:46.383 ' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.383 --rc genhtml_branch_coverage=1 00:09:46.383 --rc genhtml_function_coverage=1 00:09:46.383 --rc genhtml_legend=1 00:09:46.383 --rc geninfo_all_blocks=1 00:09:46.383 --rc geninfo_unexecuted_blocks=1 00:09:46.383 00:09:46.383 ' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.383 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:54.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.524 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.525 11:54:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.525 
11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.525 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:09:54.525 00:09:54.525 --- 10.0.0.2 ping statistics --- 00:09:54.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.525 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:09:54.525 00:09:54.525 --- 10.0.0.1 ping statistics --- 00:09:54.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.525 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=844076 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 844076 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 844076 ']' 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.525 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 [2024-10-21 11:54:30.260359] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:09:54.525 [2024-10-21 11:54:30.260426] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.525 [2024-10-21 11:54:30.349979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.525 [2024-10-21 11:54:30.402857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.525 [2024-10-21 11:54:30.402900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.525 [2024-10-21 11:54:30.402909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.525 [2024-10-21 11:54:30.402916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.525 [2024-10-21 11:54:30.402922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.525 [2024-10-21 11:54:30.404952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.525 [2024-10-21 11:54:30.405113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.525 [2024-10-21 11:54:30.405319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.525 [2024-10-21 11:54:30.405319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.525 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.525 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:54.525 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:54.525 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.525 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 [2024-10-21 11:54:31.140564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 Malloc0 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.787 11:54:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.787 [2024-10-21 11:54:31.220787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:54.787 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:54.788 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:54.788 { 00:09:54.788 "params": { 00:09:54.788 "name": "Nvme$subsystem", 00:09:54.788 "trtype": "$TEST_TRANSPORT", 00:09:54.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.788 "adrfam": "ipv4", 00:09:54.788 "trsvcid": "$NVMF_PORT", 00:09:54.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.788 "hdgst": ${hdgst:-false}, 00:09:54.788 "ddgst": ${ddgst:-false} 00:09:54.788 }, 00:09:54.788 "method": "bdev_nvme_attach_controller" 00:09:54.788 } 00:09:54.788 EOF 00:09:54.788 )") 00:09:54.788 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:54.788 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:09:54.788 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:54.788 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:54.788 "params": { 00:09:54.788 "name": "Nvme1", 00:09:54.788 "trtype": "tcp", 00:09:54.788 "traddr": "10.0.0.2", 00:09:54.788 "adrfam": "ipv4", 00:09:54.788 "trsvcid": "4420", 00:09:54.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.788 "hdgst": false, 00:09:54.788 "ddgst": false 00:09:54.788 }, 00:09:54.788 "method": "bdev_nvme_attach_controller" 00:09:54.788 }' 00:09:54.788 [2024-10-21 11:54:31.277311] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
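The target was started inside the namespace with -m 0x78 (binary 1111000, i.e. cores 3 through 6, which matches the four reactor notices above) and is now configured over its UNIX-socket RPC channel on /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. The generated JSON above is what the bdevio initiator consumes to attach a controller. A sketch of the same configuration issued directly with SPDK's rpc.py (rpc_cmd in the trace is a thin wrapper around it; the relative script path is an assumption):

    # Same target configuration as the rpc_cmd calls above, via rpc.py
    # against the default /var/tmp/spdk.sock.
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420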
00:09:54.788 [2024-10-21 11:54:31.277401] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844138 ] 00:09:54.788 [2024-10-21 11:54:31.360804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.049 [2024-10-21 11:54:31.418161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.049 [2024-10-21 11:54:31.418333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.049 [2024-10-21 11:54:31.418373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.310 I/O targets: 00:09:55.310 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.310 00:09:55.310 00:09:55.310 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.310 http://cunit.sourceforge.net/ 00:09:55.310 00:09:55.310 00:09:55.310 Suite: bdevio tests on: Nvme1n1 00:09:55.310 Test: blockdev write read block ...passed 00:09:55.310 Test: blockdev write zeroes read block ...passed 00:09:55.310 Test: blockdev write zeroes read no split ...passed 00:09:55.310 Test: blockdev write zeroes read split ...passed 00:09:55.572 Test: blockdev write zeroes read split partial ...passed 00:09:55.572 Test: blockdev reset ...[2024-10-21 11:54:31.957270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:55.572 [2024-10-21 11:54:31.957380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f34d0 (9): Bad file descriptor 00:09:55.572 [2024-10-21 11:54:32.101900] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:55.572 passed 00:09:55.572 Test: blockdev write read 8 blocks ...passed 00:09:55.833 Test: blockdev write read size > 128k ...passed 00:09:55.833 Test: blockdev write read invalid size ...passed 00:09:55.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:55.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:55.833 Test: blockdev write read max offset ...passed 00:09:55.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:55.833 Test: blockdev writev readv 8 blocks ...passed 00:09:55.833 Test: blockdev writev readv 30 x 1block ...passed 00:09:55.833 Test: blockdev writev readv block ...passed 00:09:55.833 Test: blockdev writev readv size > 128k ...passed 00:09:55.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:55.833 Test: blockdev comparev and writev ...[2024-10-21 11:54:32.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.367308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.367329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.367338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.367884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.367898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.367912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.367920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.368436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.368448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.368463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.368472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.369016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.369028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:55.833 [2024-10-21 11:54:32.369044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.833 [2024-10-21 11:54:32.369053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:55.833 passed 00:09:56.094 Test: blockdev nvme passthru rw ...passed 00:09:56.094 Test: blockdev nvme passthru vendor specific ...[2024-10-21 11:54:32.455358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.094 [2024-10-21 11:54:32.455408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:56.094 [2024-10-21 11:54:32.455800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.094 [2024-10-21 11:54:32.455812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:56.094 [2024-10-21 11:54:32.456188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.094 [2024-10-21 11:54:32.456206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:56.094 [2024-10-21 11:54:32.456588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.094 [2024-10-21 11:54:32.456600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:56.094 passed 00:09:56.094 Test: blockdev nvme admin passthru ...passed 00:09:56.094 Test: blockdev copy ...passed 00:09:56.094 00:09:56.094 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.094 suites 1 1 n/a 0 0 00:09:56.094 tests 23 23 23 0 0 00:09:56.094 asserts 152 152 152 0 n/a 00:09:56.094 00:09:56.094 Elapsed time = 1.541 seconds 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:56.094 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:56.095 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.095 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:56.095 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.095 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.095 rmmod nvme_tcp 00:09:56.095 rmmod nvme_fabrics 00:09:56.356 rmmod nvme_keyring 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
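Reading the results above: the NOTICE pairs in the "comparev and writev" case appear to be the error path the test exercises deliberately, not failures. Each fused COMPARE+WRITE is set up to miscompare, so the COMPARE completes with COMPARE FAILURE (status 02/85) and its fused WRITE is aborted with ABORTED - FAILED FUSED (00/09); likewise the INVALID OPCODE (00/01) replies in the passthru vendor-specific case are the expected response to an unsupported opcode. Hence the summary of 23 tests and 152 asserts with zero failures in about 1.5 seconds. Teardown then unloads the kernel modules (modprobe -v -r nvme-tcp cascades to nvme_fabrics and nvme_keyring, as the rmmod lines show) and, in the iptr step just below, strips only the firewall rules the harness tagged at insertion time:

    # Drop only rules carrying the SPDK_NVMF comment; everything else survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore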
00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 844076 ']' 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 844076 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 844076 ']' 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 844076 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 844076 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 844076' 00:09:56.356 killing process with pid 844076 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 844076 00:09:56.356 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 844076 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.617 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.533 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.533 00:09:58.533 real 0m12.514s 00:09:58.533 user 0m15.235s 00:09:58.533 sys 0m6.231s 00:09:58.533 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.533 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.533 ************************************ 00:09:58.533 END TEST nvmf_bdevio 00:09:58.533 ************************************ 00:09:58.533 11:54:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:58.533 00:09:58.533 real 5m4.228s 00:09:58.533 user 11m50.910s 00:09:58.533 sys 1m51.953s 00:09:58.533 
11:54:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.533 11:54:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.533 ************************************ 00:09:58.533 END TEST nvmf_target_core 00:09:58.533 ************************************ 00:09:58.794 11:54:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:58.794 11:54:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.794 11:54:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.794 11:54:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.794 ************************************ 00:09:58.794 START TEST nvmf_target_extra 00:09:58.794 ************************************ 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:58.794 * Looking for test storage... 00:09:58.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.794 --rc genhtml_branch_coverage=1 00:09:58.794 --rc genhtml_function_coverage=1 00:09:58.794 --rc genhtml_legend=1 00:09:58.794 --rc geninfo_all_blocks=1 00:09:58.794 --rc geninfo_unexecuted_blocks=1 00:09:58.794 00:09:58.794 ' 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.794 --rc genhtml_branch_coverage=1 00:09:58.794 --rc genhtml_function_coverage=1 00:09:58.794 --rc genhtml_legend=1 00:09:58.794 --rc geninfo_all_blocks=1 00:09:58.794 --rc geninfo_unexecuted_blocks=1 00:09:58.794 00:09:58.794 ' 00:09:58.794 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.794 --rc genhtml_branch_coverage=1 00:09:58.794 --rc genhtml_function_coverage=1 00:09:58.795 --rc genhtml_legend=1 00:09:58.795 --rc geninfo_all_blocks=1 00:09:58.795 --rc geninfo_unexecuted_blocks=1 00:09:58.795 00:09:58.795 ' 00:09:58.795 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:58.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.795 --rc genhtml_branch_coverage=1 00:09:58.795 --rc genhtml_function_coverage=1 00:09:58.795 --rc genhtml_legend=1 00:09:58.795 --rc geninfo_all_blocks=1 00:09:58.795 --rc geninfo_unexecuted_blocks=1 00:09:58.795 00:09:58.795 ' 00:09:58.795 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.795 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
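The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding, from the installed lcov version, which spelling of the coverage options to export: each version string is split on ".", "-" and ":" and the fields are compared numerically, left to right, with missing fields treated as zero. A hedged sketch of that comparison (the helper name version_lt is mine; the real cmp_versions also implements the other comparison operators):

    # Field-wise "less than" in the spirit of the cmp_versions trace above.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use the old option names"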
00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.055 11:54:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.056 ************************************ 00:09:59.056 START TEST nvmf_example 00:09:59.056 ************************************ 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.056 * Looking for test storage... 
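The "common.sh: line 33: [: : integer expression expected" message a few entries above (it recurs below when nvmf_example sources the same file) is a non-fatal script bug rather than a test failure: build_nvmf_app_args tests a variable that is empty in this run with the numeric -eq operator, and the [ builtin rejects the empty string. The trace shows the value only as '', so the variable's name is not recoverable from the log. [ returns nonzero, the branch is skipped, and the run continues. A reproduction and a defensive spelling (VAR is a stand-in name, not the real one):

    VAR=""
    [ "$VAR" -eq 1 ]          # bash: [: : integer expression expected
    [ "${VAR:-0}" -eq 1 ]     # falls back to 0 when empty/unset; no error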
00:09:59.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.056 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.317 --rc genhtml_branch_coverage=1 00:09:59.317 --rc genhtml_function_coverage=1 00:09:59.317 --rc genhtml_legend=1 00:09:59.317 --rc geninfo_all_blocks=1 00:09:59.317 --rc geninfo_unexecuted_blocks=1 00:09:59.317 00:09:59.317 ' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.317 --rc genhtml_branch_coverage=1 00:09:59.317 --rc genhtml_function_coverage=1 00:09:59.317 --rc genhtml_legend=1 00:09:59.317 --rc geninfo_all_blocks=1 00:09:59.317 --rc geninfo_unexecuted_blocks=1 00:09:59.317 00:09:59.317 ' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.317 --rc genhtml_branch_coverage=1 00:09:59.317 --rc genhtml_function_coverage=1 00:09:59.317 --rc genhtml_legend=1 00:09:59.317 --rc geninfo_all_blocks=1 00:09:59.317 --rc geninfo_unexecuted_blocks=1 00:09:59.317 00:09:59.317 ' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.317 --rc genhtml_branch_coverage=1 00:09:59.317 --rc genhtml_function_coverage=1 00:09:59.317 --rc genhtml_legend=1 00:09:59.317 --rc geninfo_all_blocks=1 00:09:59.317 --rc geninfo_unexecuted_blocks=1 00:09:59.317 00:09:59.317 ' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:59.317 11:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.317 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:59.318 11:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.318 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:07.461 11:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:07.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:07.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:07.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:07.461 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:07.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.462 11:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.462 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:10:07.462 00:10:07.462 --- 10.0.0.2 ping statistics --- 00:10:07.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.462 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:10:07.462 00:10:07.462 --- 10.0.0.1 ping statistics --- 00:10:07.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.462 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=848843 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 848843 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 848843 ']' 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example 
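Stripped of the xtrace markers, the topology that the two pings just verified was assembled as follows (a condensed sketch of nvmf_tcp_init from nvmf/common.sh; interface names and addresses as discovered above). The target port is moved into a private namespace so initiator traffic reaches it over the NIC rather than loopback:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tag (shortened here) lets the
# harness strip the rule again on teardown via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF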
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.462 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.724 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.724 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:07.724 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:07.724 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.724 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.725 11:54:44 
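Stripped of the markers, the rpc_cmd sequence above provisions the example target in five calls; the same commands can be issued by hand with scripts/rpc.py against the /var/tmp/spdk.sock socket named in the wait message (a sketch for reference; flags copied from the trace):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as recorded above
scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420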
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:07.725 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:19.959 Initializing NVMe Controllers
00:10:19.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:19.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:19.959 Initialization complete. Launching workers.
00:10:19.959 ========================================================
00:10:19.959 Latency(us)
00:10:19.959 Device Information : IOPS MiB/s Average min max
00:10:19.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19695.61 76.94 3249.27 585.63 15537.32
00:10:19.959 ========================================================
00:10:19.959 Total : 19695.61 76.94 3249.27 585.63 15537.32
00:10:19.959
00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.959 rmmod nvme_tcp 00:10:19.959 rmmod nvme_fabrics 00:10:19.959 rmmod nvme_keyring 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 848843 ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 848843 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 848843 ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 848843 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848843 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848843' 00:10:19.959 killing process with pid 848843 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 848843 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 848843 00:10:19.959 nvmf threads initialize successfully 00:10:19.959 bdev subsystem init successfully 00:10:19.959 created a nvmf target service 00:10:19.959 create targets's poll groups done 00:10:19.959 all subsystems of target started 00:10:19.959 nvmf target is running 00:10:19.959 all subsystems of target stopped 00:10:19.959 destroy targets's poll groups done 00:10:19.959 destroyed the nvmf target service 00:10:19.959 bdev subsystem finish successfully 00:10:19.959 nvmf threads destroy successfully 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.959 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.960 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.960 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.960 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.220 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.220 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:20.220 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.220 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.480 00:10:20.480 real 0m21.379s 00:10:20.480 user 0m46.402s 00:10:20.480 sys 0m6.997s 00:10:20.480 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.480 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.480 ************************************ 00:10:20.480 END TEST nvmf_example 00:10:20.480 ************************************ 00:10:20.480 11:54:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.480 11:54:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.480 11:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.481 11:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.481 ************************************ 00:10:20.481 START TEST nvmf_filesystem 00:10:20.481 ************************************ 00:10:20.481 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.481 * Looking for test storage... 00:10:20.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.481 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.481 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.481 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.745 --rc genhtml_branch_coverage=1 00:10:20.745 --rc genhtml_function_coverage=1 00:10:20.745 --rc genhtml_legend=1 00:10:20.745 --rc geninfo_all_blocks=1 00:10:20.745 --rc geninfo_unexecuted_blocks=1 00:10:20.745 00:10:20.745 ' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.745 --rc genhtml_branch_coverage=1 00:10:20.745 --rc genhtml_function_coverage=1 00:10:20.745 --rc genhtml_legend=1 00:10:20.745 --rc geninfo_all_blocks=1 00:10:20.745 --rc geninfo_unexecuted_blocks=1 00:10:20.745 00:10:20.745 ' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.745 --rc genhtml_branch_coverage=1 00:10:20.745 --rc genhtml_function_coverage=1 00:10:20.745 --rc genhtml_legend=1 00:10:20.745 --rc geninfo_all_blocks=1 00:10:20.745 --rc geninfo_unexecuted_blocks=1 00:10:20.745 00:10:20.745 ' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.745 --rc genhtml_branch_coverage=1 00:10:20.745 --rc genhtml_function_coverage=1 00:10:20.745 --rc genhtml_legend=1 00:10:20.745 --rc geninfo_all_blocks=1 00:10:20.745 --rc geninfo_unexecuted_blocks=1 00:10:20.745 00:10:20.745 ' 00:10:20.745 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:20.745 11:54:57 
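The scripts/common.sh trace above is the harness comparing the installed lcov (1.15) against version 2 to pick compatible coverage flags. Condensed, the component-wise comparison behind lt works roughly like this (a sketch under the same split-on-'.-:' convention, not the exact source):
lt() { cmp_versions "$1" '<' "$2"; }    # e.g. lt 1.15 2 -> exit 0 (true)
cmp_versions() {
    local IFS='.-:'                      # split versions on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        # Missing components default to 0; first difference decides the result.
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
    done
    [[ $2 == *'='* ]]                    # all components equal: only ==, <=, >= hold
}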
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:20.746 11:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:20.746 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:20.747 #define SPDK_CONFIG_H 00:10:20.747 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:20.747 #define SPDK_CONFIG_APPS 1 00:10:20.747 #define SPDK_CONFIG_ARCH native 00:10:20.747 #undef SPDK_CONFIG_ASAN 00:10:20.747 #undef SPDK_CONFIG_AVAHI 00:10:20.747 #undef SPDK_CONFIG_CET 00:10:20.747 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:20.747 #define SPDK_CONFIG_COVERAGE 1 00:10:20.747 #define SPDK_CONFIG_CROSS_PREFIX 00:10:20.747 #undef SPDK_CONFIG_CRYPTO 00:10:20.747 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:20.747 #undef SPDK_CONFIG_CUSTOMOCF 00:10:20.747 #undef SPDK_CONFIG_DAOS 00:10:20.747 #define SPDK_CONFIG_DAOS_DIR 00:10:20.747 #define SPDK_CONFIG_DEBUG 1 00:10:20.747 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:20.747 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.747 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:20.747 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:20.747 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:20.747 #undef SPDK_CONFIG_DPDK_UADK 00:10:20.747 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.747 #define SPDK_CONFIG_EXAMPLES 1 00:10:20.747 #undef SPDK_CONFIG_FC 00:10:20.747 #define SPDK_CONFIG_FC_PATH 00:10:20.747 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:20.747 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:20.747 #define SPDK_CONFIG_FSDEV 1 00:10:20.747 #undef SPDK_CONFIG_FUSE 00:10:20.747 #undef SPDK_CONFIG_FUZZER 00:10:20.747 #define SPDK_CONFIG_FUZZER_LIB 00:10:20.747 #undef SPDK_CONFIG_GOLANG 00:10:20.747 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:20.747 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:20.747 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:20.747 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:20.747 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:20.747 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:20.747 #undef SPDK_CONFIG_HAVE_LZ4 00:10:20.747 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:20.747 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:20.747 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:20.747 #define SPDK_CONFIG_IDXD 1 00:10:20.747 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:20.747 #undef SPDK_CONFIG_IPSEC_MB 00:10:20.747 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:20.747 #define SPDK_CONFIG_ISAL 1 00:10:20.747 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:20.747 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:20.747 #define SPDK_CONFIG_LIBDIR 00:10:20.747 #undef SPDK_CONFIG_LTO 00:10:20.747 #define SPDK_CONFIG_MAX_LCORES 128 00:10:20.747 #define SPDK_CONFIG_NVME_CUSE 1 00:10:20.747 #undef SPDK_CONFIG_OCF 00:10:20.747 #define SPDK_CONFIG_OCF_PATH 00:10:20.747 #define SPDK_CONFIG_OPENSSL_PATH 00:10:20.747 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:20.747 #define SPDK_CONFIG_PGO_DIR 00:10:20.747 #undef SPDK_CONFIG_PGO_USE 00:10:20.747 #define SPDK_CONFIG_PREFIX /usr/local 00:10:20.747 #undef SPDK_CONFIG_RAID5F 00:10:20.747 #undef SPDK_CONFIG_RBD 00:10:20.747 #define SPDK_CONFIG_RDMA 1 00:10:20.747 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:20.747 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:20.747 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:20.747 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:20.747 #define SPDK_CONFIG_SHARED 1 00:10:20.747 #undef SPDK_CONFIG_SMA 00:10:20.747 #define SPDK_CONFIG_TESTS 1 00:10:20.747 #undef SPDK_CONFIG_TSAN 00:10:20.747 #define SPDK_CONFIG_UBLK 1 00:10:20.747 #define SPDK_CONFIG_UBSAN 1 00:10:20.747 #undef SPDK_CONFIG_UNIT_TESTS 00:10:20.747 #undef SPDK_CONFIG_URING 00:10:20.747 #define 
SPDK_CONFIG_URING_PATH 00:10:20.747 #undef SPDK_CONFIG_URING_ZNS 00:10:20.747 #undef SPDK_CONFIG_USDT 00:10:20.747 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:20.747 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:20.747 #define SPDK_CONFIG_VFIO_USER 1 00:10:20.747 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:20.747 #define SPDK_CONFIG_VHOST 1 00:10:20.747 #define SPDK_CONFIG_VIRTIO 1 00:10:20.747 #undef SPDK_CONFIG_VTUNE 00:10:20.747 #define SPDK_CONFIG_VTUNE_DIR 00:10:20.747 #define SPDK_CONFIG_WERROR 1 00:10:20.747 #define SPDK_CONFIG_WPDK_DIR 00:10:20.747 #undef SPDK_CONFIG_XNVME 00:10:20.747 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.747 11:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:20.747 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:20.748 
11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:20.748 11:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.748 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
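The LD_LIBRARY_PATH and PYTHONPATH values above contain the same directory triple repeated several times, which is what unconditional appends produce when the environment scripts are sourced repeatedly; it is noisy but harmless, since the dynamic loader and Python both stop at the first match. A hedged sketch of a dedup guard that would avoid the repetition (append_once is a hypothetical helper, not part of the traced scripts):

  append_once() {
    case ":${LD_LIBRARY_PATH}:" in
      *":$1:"*) ;;                                              # already present, skip
      *) LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$1" ;;
    esac
  }
  append_once /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib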
00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
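The suppression-file steps traced above (rm, cat, "echo leak:libfuse3.so", then LSAN_OPTIONS) tell LeakSanitizer to ignore a known leak in libfuse3. A minimal reconstruction under the assumption that the echo feeds the file shown in the trace (path and suppressed symbol taken verbatim from the trace):

  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppress known fuse3 leak
  export LSAN_OPTIONS=suppressions=$asan_suppression_file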
00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 851630 ]] 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 851630 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:20.749 
11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:20.749 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.ac6Wfz 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ac6Wfz/tests/target /tmp/spdk.ac6Wfz 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=607141888 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:20.750 11:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677287936 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122584608768 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356533760 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6771924992 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668233728 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847943168 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.750 11:54:57 
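The mounts/fss/sizes/avails/uses assignments traced above and continuing below come from reading "df -T" line by line into per-mount associative arrays, so the script can later pick a mount point with enough free space for test storage. A minimal sketch of that pattern, assuming the byte values in the trace result from scaling df's 1K-block counts (the scaling itself is an assumption; only the read loop and array names appear in the trace):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
  done < <(df -T | grep -v Filesystem)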
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677429248 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=839680 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:20.750 * Looking for test storage... 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122584608768 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8986517504 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.750 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.012 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.013 --rc genhtml_branch_coverage=1 00:10:21.013 --rc genhtml_function_coverage=1 00:10:21.013 --rc genhtml_legend=1 00:10:21.013 --rc geninfo_all_blocks=1 00:10:21.013 --rc geninfo_unexecuted_blocks=1 00:10:21.013 00:10:21.013 ' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.013 --rc genhtml_branch_coverage=1 00:10:21.013 --rc genhtml_function_coverage=1 00:10:21.013 --rc genhtml_legend=1 00:10:21.013 --rc geninfo_all_blocks=1 00:10:21.013 --rc geninfo_unexecuted_blocks=1 00:10:21.013 00:10:21.013 ' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.013 --rc genhtml_branch_coverage=1 00:10:21.013 --rc genhtml_function_coverage=1 00:10:21.013 --rc genhtml_legend=1 00:10:21.013 --rc geninfo_all_blocks=1 00:10:21.013 --rc geninfo_unexecuted_blocks=1 00:10:21.013 00:10:21.013 ' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.013 --rc genhtml_branch_coverage=1 00:10:21.013 --rc genhtml_function_coverage=1 00:10:21.013 --rc genhtml_legend=1 00:10:21.013 --rc geninfo_all_blocks=1 00:10:21.013 --rc geninfo_unexecuted_blocks=1 00:10:21.013 00:10:21.013 ' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
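The cmp_versions trace above ("lt 1.15 2") splits each version string on '.', '-' and ':' and compares the components numerically, field by field, padding the shorter version with zeros. A simplified sketch of that comparison (cmp_lt is a condensed stand-in; the traced script routes each component through a decimal-validation helper that is omitted here):

  cmp_lt() {
    local -a v1 v2; local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                     # equal is not less-than
  }
  cmp_lt 1.15 2 && echo "lcov predates 2.x"   # true for the 1.15 seen here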
-- nvmf/common.sh@7 -- # uname -s 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.013 11:54:57 
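The "[: : integer expression expected" message recorded above is a genuine, if non-fatal, script error: test's -eq operator requires integer operands, and the traced expression '[' '' -eq 1 ']' passes it an empty string, so the test fails with status 2 and the guarded branch is simply skipped. A short illustration (the variable name is hypothetical; the trace does not show which variable was empty at common.sh line 33):

  flag=""
  # fails with "[: : integer expression expected", exit status 2:
  [ "$flag" -eq 1 ] || true
  # guarded form substitutes a numeric default before comparing:
  [ "${flag:-0}" -eq 1 ] || echo "flag not set to 1"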
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.013 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.183 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.184 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.184 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.184 11:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.184 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.184 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:29.184 11:55:04 
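The device discovery traced above leans on sysfs: the kernel lists a PCI function's network interfaces under its sysfs node, so a glob plus a basename strip yields the interface names (cvl_0_0 and cvl_0_1 for the two E810 ports found here). A minimal sketch using the first address from the trace:

  pci=0000:4b:00.0                                   # address taken from the trace
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"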
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:10:29.184 00:10:29.184 --- 10.0.0.2 ping statistics --- 00:10:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.184 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:10:29.184 00:10:29.184 --- 10.0.0.1 ping statistics --- 00:10:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.184 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.184 ************************************ 00:10:29.184 START TEST nvmf_filesystem_no_in_capsule 00:10:29.184 ************************************ 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=855442 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 855442 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 855442 ']' 00:10:29.184 11:55:04 
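Condensed from the nvmf_tcp_init trace above: the target-side port is moved into its own network namespace so initiator and target can exchange real TCP traffic over the two physical ports of the same host, and the cross-namespace pings verify the plumbing before the target starts. The commands, taken from the trace with the iptables comment options dropped:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # reach the target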
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.184 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.185 [2024-10-21 11:55:05.048503] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:10:29.185 [2024-10-21 11:55:05.048567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.185 [2024-10-21 11:55:05.137712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.185 [2024-10-21 11:55:05.191416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.185 [2024-10-21 11:55:05.191472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.185 [2024-10-21 11:55:05.191480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.185 [2024-10-21 11:55:05.191488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.185 [2024-10-21 11:55:05.191495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
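The waitforlisten call traced above (PID 855442, rpc_addr /var/tmp/spdk.sock, max_retries 100) blocks until the freshly launched nvmf_tgt is ready to accept RPCs. A plausible sketch of such a wait loop, under the assumption that readiness is detected via the RPC UNIX socket; the real helper lives in autotest_common.sh and is not shown in this trace:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
      [ -S "$rpc_addr" ] && return 0           # RPC socket is up
      sleep 0.5
    done
    return 1
  }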
00:10:29.185 [2024-10-21 11:55:05.193911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.185 [2024-10-21 11:55:05.194074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.185 [2024-10-21 11:55:05.194239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.185 [2024-10-21 11:55:05.194239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.445 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.445 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.446 [2024-10-21 11:55:05.922556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.446 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.706 Malloc1 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.706 11:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.706 [2024-10-21 11:55:06.076009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.706 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:29.706 { 00:10:29.706 "name": "Malloc1", 00:10:29.706 "aliases": [ 00:10:29.706 "7096c66d-61ed-4697-9e83-ebaec6871a6d" 00:10:29.706 ], 00:10:29.706 "product_name": "Malloc disk", 00:10:29.706 "block_size": 512, 00:10:29.706 "num_blocks": 1048576, 00:10:29.706 "uuid": "7096c66d-61ed-4697-9e83-ebaec6871a6d", 00:10:29.706 "assigned_rate_limits": { 00:10:29.706 "rw_ios_per_sec": 0, 00:10:29.706 "rw_mbytes_per_sec": 0, 00:10:29.706 "r_mbytes_per_sec": 0, 00:10:29.706 "w_mbytes_per_sec": 0 00:10:29.706 }, 00:10:29.706 "claimed": true, 00:10:29.706 "claim_type": "exclusive_write", 00:10:29.706 "zoned": false, 00:10:29.706 "supported_io_types": { 00:10:29.706 "read": 
true, 00:10:29.706 "write": true, 00:10:29.706 "unmap": true, 00:10:29.706 "flush": true, 00:10:29.706 "reset": true, 00:10:29.706 "nvme_admin": false, 00:10:29.706 "nvme_io": false, 00:10:29.706 "nvme_io_md": false, 00:10:29.706 "write_zeroes": true, 00:10:29.706 "zcopy": true, 00:10:29.706 "get_zone_info": false, 00:10:29.706 "zone_management": false, 00:10:29.706 "zone_append": false, 00:10:29.706 "compare": false, 00:10:29.706 "compare_and_write": false, 00:10:29.707 "abort": true, 00:10:29.707 "seek_hole": false, 00:10:29.707 "seek_data": false, 00:10:29.707 "copy": true, 00:10:29.707 "nvme_iov_md": false 00:10:29.707 }, 00:10:29.707 "memory_domains": [ 00:10:29.707 { 00:10:29.707 "dma_device_id": "system", 00:10:29.707 "dma_device_type": 1 00:10:29.707 }, 00:10:29.707 { 00:10:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.707 "dma_device_type": 2 00:10:29.707 } 00:10:29.707 ], 00:10:29.707 "driver_specific": {} 00:10:29.707 } 00:10:29.707 ]' 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:29.707 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.618 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.618 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.618 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.618 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.618 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:33.531 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:34.102 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:35.044 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.045 ************************************ 00:10:35.045 START TEST filesystem_ext4 00:10:35.045 ************************************ 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
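The setup the filesystem subtests build on — everything above from transport creation through partitioning — condenses to the following sketch (rpc_cmd in the suite is a wrapper around SPDK's scripts/rpc.py; the --hostnqn/--hostid values passed to nvme connect are elided here):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: no in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ramdisk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%

waitforserial then polls lsblk -l -o NAME,SERIAL until a device with serial SPDKISFASTANDAWESOME appears, and the 536870912-byte device size is checked against the malloc bdev size before the partition is created.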
00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:35.045 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:35.045 mke2fs 1.47.0 (5-Feb-2023) 00:10:35.045 Discarding device blocks: 0/522240 done 00:10:35.045 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:35.045 Filesystem UUID: 347bfee3-b8be-499f-afa2-e0102c37a917 00:10:35.045 Superblock backups stored on blocks: 00:10:35.045 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:35.045 00:10:35.045 Allocating group tables: 0/64 done 00:10:35.045 Writing inode tables: 0/64 done 00:10:35.305 Creating journal (8192 blocks): done 00:10:37.629 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.629 00:10:37.629 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:37.629 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.923 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.184 
11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 855442 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.184 00:10:43.184 real 0m8.053s 00:10:43.184 user 0m0.030s 00:10:43.184 sys 0m0.078s 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.184 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:43.184 ************************************ 00:10:43.184 END TEST filesystem_ext4 00:10:43.184 ************************************ 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.185 ************************************ 00:10:43.185 START TEST filesystem_btrfs 00:10:43.185 ************************************ 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:43.185 11:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:43.185 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:43.446 btrfs-progs v6.8.1 00:10:43.446 See https://btrfs.readthedocs.io for more information. 00:10:43.446 00:10:43.446 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:43.446 NOTE: several default settings have changed in version 5.15, please make sure 00:10:43.446 this does not affect your deployments: 00:10:43.446 - DUP for metadata (-m dup) 00:10:43.446 - enabled no-holes (-O no-holes) 00:10:43.446 - enabled free-space-tree (-R free-space-tree) 00:10:43.446 00:10:43.446 Label: (null) 00:10:43.446 UUID: 05f96e09-7d91-4ecf-acb1-3b455cf9767f 00:10:43.446 Node size: 16384 00:10:43.446 Sector size: 4096 (CPU page size: 4096) 00:10:43.446 Filesystem size: 510.00MiB 00:10:43.446 Block group profiles: 00:10:43.446 Data: single 8.00MiB 00:10:43.446 Metadata: DUP 32.00MiB 00:10:43.446 System: DUP 8.00MiB 00:10:43.446 SSD detected: yes 00:10:43.446 Zoned device: no 00:10:43.446 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:43.446 Checksum: crc32c 00:10:43.446 Number of devices: 1 00:10:43.446 Devices: 00:10:43.446 ID SIZE PATH 00:10:43.446 1 510.00MiB /dev/nvme0n1p1 00:10:43.446 00:10:43.446 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:43.446 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 855442 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.389 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.650 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.651 
11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.651 00:10:44.651 real 0m1.320s 00:10:44.651 user 0m0.030s 00:10:44.651 sys 0m0.121s 00:10:44.651 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.651 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:44.651 ************************************ 00:10:44.651 END TEST filesystem_btrfs 00:10:44.651 ************************************ 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.651 ************************************ 00:10:44.651 START TEST filesystem_xfs 00:10:44.651 ************************************ 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:44.651 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:44.651 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:44.651 = sectsz=512 attr=2, projid32bit=1 00:10:44.651 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:44.651 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:44.651 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:44.651 = sunit=0 swidth=0 blks 00:10:44.651 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:44.651 log =internal log bsize=4096 blocks=16384, version=2 00:10:44.651 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:44.651 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:45.593 Discarding blocks...Done. 00:10:45.853 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:45.853 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 855442 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.397 00:10:48.397 real 0m3.697s 00:10:48.397 user 0m0.027s 00:10:48.397 sys 0m0.080s 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:48.397 ************************************ 00:10:48.398 END TEST filesystem_xfs 00:10:48.398 ************************************ 00:10:48.398 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:48.398 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:48.398 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.659 11:55:25 
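The ext4, btrfs, and xfs blocks above all run the same create-and-verify routine; reduced to a shell skeleton it looks roughly like this (a sketch of the flow in target/filesystem.sh, under the names that appear in the trace):

    make_filesystem() {                           # ext4 gets -F, everything else -f
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        "mkfs.$fstype" $force "$dev_name"
    }
    make_filesystem "$fstype" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync                 # push a write through the filesystem
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                            # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1         # ...and the device and its partition
    lsblk -l -o NAME | grep -q -w nvme0n1p1       # must still be visible afterwards

The real/user/sys summaries printed at the end of each block are ordinary bash time output from the run_test wrapper.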
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.659 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.659 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 855442 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 855442 ']' 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 855442 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 855442 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 855442' 00:10:48.660 killing process with pid 855442 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 855442 00:10:48.660 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 855442 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.921 00:10:48.921 real 0m20.342s 00:10:48.921 user 1m20.396s 00:10:48.921 sys 0m1.507s 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 ************************************ 00:10:48.921 END TEST nvmf_filesystem_no_in_capsule 00:10:48.921 ************************************ 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 ************************************ 00:10:48.921 START TEST nvmf_filesystem_in_capsule 00:10:48.921 ************************************ 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=859528 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 859528 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 859528 ']' 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
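A second pass of the whole suite starts here with in-capsule data enabled. Functionally the only change is the transport option issued a little further down (sketch):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # allow up to 4 KiB of in-capsule data

Roughly speaking, with -c 0 the target must fetch every write's payload in a separate data transfer after the command arrives, while with -c 4096 writes of up to 4 KiB carry their payload inside the NVMe/TCP command capsule itself; that difference is what the no_in_capsule/in_capsule test names distinguish.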
00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.921 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 [2024-10-21 11:55:25.466911] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:10:48.921 [2024-10-21 11:55:25.466967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.182 [2024-10-21 11:55:25.551843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.182 [2024-10-21 11:55:25.589042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.182 [2024-10-21 11:55:25.589075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.182 [2024-10-21 11:55:25.589081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.182 [2024-10-21 11:55:25.589089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.182 [2024-10-21 11:55:25.589093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.182 [2024-10-21 11:55:25.590417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.182 [2024-10-21 11:55:25.590588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.182 [2024-10-21 11:55:25.590716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.182 [2024-10-21 11:55:25.590718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.754 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.755 [2024-10-21 11:55:26.314041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.755 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.755 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 Malloc1 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 [2024-10-21 11:55:26.436304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:50.015 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.015 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:50.015 { 00:10:50.015 "name": "Malloc1", 00:10:50.015 "aliases": [ 00:10:50.015 "7cff8a30-a6a1-492a-b103-4a2214fbe538" 00:10:50.015 ], 00:10:50.015 "product_name": "Malloc disk", 00:10:50.015 "block_size": 512, 00:10:50.015 "num_blocks": 1048576, 00:10:50.016 "uuid": "7cff8a30-a6a1-492a-b103-4a2214fbe538", 00:10:50.016 "assigned_rate_limits": { 00:10:50.016 "rw_ios_per_sec": 0, 00:10:50.016 "rw_mbytes_per_sec": 0, 00:10:50.016 "r_mbytes_per_sec": 0, 00:10:50.016 "w_mbytes_per_sec": 0 00:10:50.016 }, 00:10:50.016 "claimed": true, 00:10:50.016 "claim_type": "exclusive_write", 00:10:50.016 "zoned": false, 00:10:50.016 "supported_io_types": { 00:10:50.016 "read": true, 00:10:50.016 "write": true, 00:10:50.016 "unmap": true, 00:10:50.016 "flush": true, 00:10:50.016 "reset": true, 00:10:50.016 "nvme_admin": false, 00:10:50.016 "nvme_io": false, 00:10:50.016 "nvme_io_md": false, 00:10:50.016 "write_zeroes": true, 00:10:50.016 "zcopy": true, 00:10:50.016 "get_zone_info": false, 00:10:50.016 "zone_management": false, 00:10:50.016 "zone_append": false, 00:10:50.016 "compare": false, 00:10:50.016 "compare_and_write": false, 00:10:50.016 "abort": true, 00:10:50.016 "seek_hole": false, 00:10:50.016 "seek_data": false, 00:10:50.016 "copy": true, 00:10:50.016 "nvme_iov_md": false 00:10:50.016 }, 00:10:50.016 "memory_domains": [ 00:10:50.016 { 00:10:50.016 "dma_device_id": "system", 00:10:50.016 "dma_device_type": 1 00:10:50.016 }, 00:10:50.016 { 00:10:50.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.016 "dma_device_type": 2 00:10:50.016 } 00:10:50.016 ], 00:10:50.016 "driver_specific": {} 00:10:50.016 } 00:10:50.016 ]' 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:50.016 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.930 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.930 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.930 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.930 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:51.930 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:53.844 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:54.149 11:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:54.835 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.795 ************************************ 00:10:55.795 START TEST filesystem_in_capsule_ext4 00:10:55.795 ************************************ 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:55.795 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:55.796 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.796 mke2fs 1.47.0 (5-Feb-2023) 00:10:56.054 Discarding device blocks: 0/522240 done 00:10:56.054 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.054 Filesystem UUID: b0b6e03e-8078-4bae-b6b9-a395459e6e59 00:10:56.054 Superblock backups stored on blocks: 00:10:56.054 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.054 00:10:56.054 Allocating group tables: 0/64 done 00:10:56.054 Writing inode tables: 
0/64 done 00:10:56.054 Creating journal (8192 blocks): done 00:10:58.531 Writing superblocks and filesystem accounting information: 0/64 done 00:10:58.531 00:10:58.531 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:58.531 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:05.113 11:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 859528 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.113 00:11:05.113 real 0m8.665s 00:11:05.113 user 0m0.022s 00:11:05.113 sys 0m0.086s 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:05.113 ************************************ 00:11:05.113 END TEST filesystem_in_capsule_ext4 00:11:05.113 ************************************ 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.113 
************************************ 00:11:05.113 START TEST filesystem_in_capsule_btrfs 00:11:05.113 ************************************ 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:05.113 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:05.114 btrfs-progs v6.8.1 00:11:05.114 See https://btrfs.readthedocs.io for more information. 00:11:05.114 00:11:05.114 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:05.114 NOTE: several default settings have changed in version 5.15, please make sure 00:11:05.114 this does not affect your deployments: 00:11:05.114 - DUP for metadata (-m dup) 00:11:05.114 - enabled no-holes (-O no-holes) 00:11:05.114 - enabled free-space-tree (-R free-space-tree) 00:11:05.114 00:11:05.114 Label: (null) 00:11:05.114 UUID: ccf41a26-e267-4279-a148-6943d843680f 00:11:05.114 Node size: 16384 00:11:05.114 Sector size: 4096 (CPU page size: 4096) 00:11:05.114 Filesystem size: 510.00MiB 00:11:05.114 Block group profiles: 00:11:05.114 Data: single 8.00MiB 00:11:05.114 Metadata: DUP 32.00MiB 00:11:05.114 System: DUP 8.00MiB 00:11:05.114 SSD detected: yes 00:11:05.114 Zoned device: no 00:11:05.114 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:05.114 Checksum: crc32c 00:11:05.114 Number of devices: 1 00:11:05.114 Devices: 00:11:05.114 ID SIZE PATH 00:11:05.114 1 510.00MiB /dev/nvme0n1p1 00:11:05.114 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.114 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 859528 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.686 00:11:05.686 real 0m1.065s 00:11:05.686 user 0m0.028s 00:11:05.686 sys 0m0.121s 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:05.686 ************************************ 00:11:05.686 END TEST filesystem_in_capsule_btrfs 00:11:05.686 ************************************ 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.686 ************************************ 00:11:05.686 START TEST filesystem_in_capsule_xfs 00:11:05.686 ************************************ 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:05.686 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:05.947 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:05.947 = sectsz=512 attr=2, projid32bit=1 00:11:05.947 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:05.947 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:05.947 data = bsize=4096 blocks=130560, imaxpct=25 00:11:05.947 = sunit=0 swidth=0 blks 00:11:05.947 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:05.947 log =internal log bsize=4096 blocks=16384, version=2 00:11:05.947 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:05.947 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:06.519 Discarding blocks...Done. 
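
The mkfs.xfs output above is the third pass of the same create-and-exercise loop the ext4 and btrfs runs went through (and which continues below for xfs): format the partition carved out of the exported namespace, mount it, push a write through the fabric, remove it with syncs in between, unmount, and confirm via lsblk that the device and partition are still visible. A minimal, self-contained sketch of that loop follows; /dev/nvme0n1p1 and /mnt/device match the trace, but the helper name and the loop are illustrative, not the verbatim code from target/filesystem.sh:

    #!/usr/bin/env bash
    # Minimal sketch of the per-filesystem smoke test traced in this log.
    # Device and mount point match the trace; the helper is illustrative.
    set -euo pipefail

    dev=/dev/nvme0n1p1        # partition on the NVMe-oF attached namespace
    mnt=/mnt/device

    exercise_filesystem() {
        local fstype=$1 force=-f
        [ "$fstype" = ext4 ] && force=-F   # mkfs.ext4 takes -F, btrfs/xfs take -f
        "mkfs.$fstype" "$force" "$dev"
        mount "$dev" "$mnt"
        touch "$mnt/aaa"                   # push a write through the fabric
        sync
        rm "$mnt/aaa"
        sync
        umount "$mnt"
        lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
    }

    for fs in ext4 btrfs xfs; do
        exercise_filesystem "$fs"
    done

The real test additionally re-checks that the target process (kill -0 $nvmfpid) survived each round trip, which is the point of the exercise: data pushed through mount/touch/sync must not crash the NVMe-oF target.
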
00:11:06.519 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:06.519 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.064 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 859528 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.325 00:11:09.325 real 0m3.465s 00:11:09.325 user 0m0.028s 00:11:09.325 sys 0m0.078s 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.325 ************************************ 00:11:09.325 END TEST filesystem_in_capsule_xfs 00:11:09.325 ************************************ 00:11:09.325 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:09.585 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:09.585 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.846 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.846 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:09.846 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 859528 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 859528 ']' 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 859528 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 859528 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 859528' 00:11:09.847 killing process with pid 859528 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 859528 00:11:09.847 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 859528 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:10.109 00:11:10.109 real 0m21.093s 00:11:10.109 user 1m23.499s 00:11:10.109 sys 0m1.463s 00:11:10.109 11:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.109 ************************************ 00:11:10.109 END TEST nvmf_filesystem_in_capsule 00:11:10.109 ************************************ 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.109 rmmod nvme_tcp 00:11:10.109 rmmod nvme_fabrics 00:11:10.109 rmmod nvme_keyring 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.109 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.662 00:11:12.662 real 0m51.787s 00:11:12.662 user 2m46.291s 00:11:12.662 sys 0m8.869s 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.662 
************************************ 00:11:12.662 END TEST nvmf_filesystem 00:11:12.662 ************************************ 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.662 ************************************ 00:11:12.662 START TEST nvmf_target_discovery 00:11:12.662 ************************************ 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:12.662 * Looking for test storage... 00:11:12.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.662 --rc genhtml_branch_coverage=1 00:11:12.662 --rc genhtml_function_coverage=1 00:11:12.662 --rc genhtml_legend=1 00:11:12.662 --rc geninfo_all_blocks=1 00:11:12.662 --rc geninfo_unexecuted_blocks=1 00:11:12.662 00:11:12.662 ' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.662 --rc genhtml_branch_coverage=1 00:11:12.662 --rc genhtml_function_coverage=1 00:11:12.662 --rc genhtml_legend=1 00:11:12.662 --rc geninfo_all_blocks=1 00:11:12.662 --rc geninfo_unexecuted_blocks=1 00:11:12.662 00:11:12.662 ' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.662 --rc genhtml_branch_coverage=1 00:11:12.662 --rc genhtml_function_coverage=1 00:11:12.662 --rc genhtml_legend=1 00:11:12.662 --rc geninfo_all_blocks=1 00:11:12.662 --rc geninfo_unexecuted_blocks=1 00:11:12.662 00:11:12.662 ' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.662 --rc genhtml_branch_coverage=1 00:11:12.662 --rc genhtml_function_coverage=1 00:11:12.662 --rc genhtml_legend=1 00:11:12.662 --rc geninfo_all_blocks=1 00:11:12.662 --rc geninfo_unexecuted_blocks=1 00:11:12.662 00:11:12.662 ' 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.662 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.662 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.663 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.808 11:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:20.808 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.808 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:20.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:20.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
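
The enumeration being traced around this point walks the supported vendor:device IDs (the e810/x722/mlx arrays populated a few lines back), then resolves each matching PCI function to a kernel interface through the same /sys/bus/pci/devices/$pci/net/* glob visible in the trace. A condensed sketch of that idea; reading the IDs straight from sysfs here is an assumption (common.sh consults its own PCI cache), and the list is trimmed to the E810 parts (0x8086:0x159b) this run actually found:

    #!/usr/bin/env bash
    # Sketch: map supported test NICs to kernel net devices via sysfs.
    set -euo pipefail

    supported=("0x8086:0x1592" "0x8086:0x159b")   # assumed subset of the full list
    net_devs=()

    for pci in /sys/bus/pci/devices/*; do
        id="$(<"$pci/vendor"):$(<"$pci/device")"
        for want in "${supported[@]}"; do
            [ "$id" = "$want" ] || continue
            for nd in "$pci"/net/*; do            # PCI function -> net interface
                [ -e "$nd" ] && net_devs+=("${nd##*/}")
            done
        done
    done

    if (( ${#net_devs[@]} )); then
        printf 'Found net device: %s\n' "${net_devs[@]}"
    fi

On this rig the two E810 ports resolve to cvl_0_0 and cvl_0_1, as the "Found net devices under 0000:4b:00.x" lines below confirm; one port then gets moved into a network namespace so the target and initiator can talk over real hardware on the same host.
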
00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:20.809 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.809 11:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:11:20.809 00:11:20.809 --- 10.0.0.2 ping statistics --- 00:11:20.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.809 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:11:20.809 00:11:20.809 --- 10.0.0.1 ping statistics --- 00:11:20.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.809 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=868113 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 868113 00:11:20.809 11:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 868113 ']' 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.809 11:55:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.809 [2024-10-21 11:55:56.642507] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:11:20.809 [2024-10-21 11:55:56.642574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.809 [2024-10-21 11:55:56.732621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.809 [2024-10-21 11:55:56.787030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.809 [2024-10-21 11:55:56.787084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.809 [2024-10-21 11:55:56.787093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.809 [2024-10-21 11:55:56.787100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.809 [2024-10-21 11:55:56.787107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
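
With nvmf_tgt now running inside cvl_0_0_ns_spdk, discovery.sh configures it over JSON-RPC: one TCP transport, then for each of four subsystems a null bdev, the subsystem itself, a namespace, and a 10.0.0.2:4420 listener. That is the rpc_cmd sequence traced below; the same steps as a stand-alone sketch, where the ./scripts/rpc.py path and the wrapper are assumptions while the commands and flags mirror the log (NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512):

    #!/usr/bin/env bash
    # Sketch of the RPC sequence the discovery test issues next.
    set -euo pipefail

    rpc() { ./scripts/rpc.py "$@"; }    # assumed path; talks to /var/tmp/spdk.sock

    rpc nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 4); do
        rpc bdev_null_create "Null$i" 102400 512       # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The discovery service then has four subsystems to report, which is what the rest of the test interrogates from the initiator side.
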
00:11:20.809 [2024-10-21 11:55:56.789567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.809 [2024-10-21 11:55:56.789795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.809 [2024-10-21 11:55:56.789958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.809 [2024-10-21 11:55:56.789960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.070 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 [2024-10-21 11:55:57.518055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 Null1 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 [2024-10-21 11:55:57.578609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 Null2 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:21.071 Null3 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 Null4 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.332 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.333 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.333 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:21.594 00:11:21.594 Discovery Log Number of Records 6, Generation counter 6 00:11:21.594 =====Discovery Log Entry 0====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: current discovery subsystem 00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4420 00:11:21.594 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: explicit discovery connections, duplicate discovery information 00:11:21.594 sectype: none 00:11:21.594 =====Discovery Log Entry 1====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: nvme subsystem 00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4420 00:11:21.594 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: none 00:11:21.594 sectype: none 00:11:21.594 =====Discovery Log Entry 2====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: nvme subsystem 00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4420 00:11:21.594 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: none 00:11:21.594 sectype: none 00:11:21.594 =====Discovery Log Entry 3====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: nvme subsystem 00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4420 00:11:21.594 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: none 00:11:21.594 sectype: none 00:11:21.594 =====Discovery Log Entry 4====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: nvme subsystem 
00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4420 00:11:21.594 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: none 00:11:21.594 sectype: none 00:11:21.594 =====Discovery Log Entry 5====== 00:11:21.594 trtype: tcp 00:11:21.594 adrfam: ipv4 00:11:21.594 subtype: discovery subsystem referral 00:11:21.594 treq: not required 00:11:21.594 portid: 0 00:11:21.594 trsvcid: 4430 00:11:21.594 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:21.594 traddr: 10.0.0.2 00:11:21.594 eflags: none 00:11:21.594 sectype: none 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:21.595 Perform nvmf subsystem discovery via RPC 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 [ 00:11:21.595 { 00:11:21.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:21.595 "subtype": "Discovery", 00:11:21.595 "listen_addresses": [ 00:11:21.595 { 00:11:21.595 "trtype": "TCP", 00:11:21.595 "adrfam": "IPv4", 00:11:21.595 "traddr": "10.0.0.2", 00:11:21.595 "trsvcid": "4420" 00:11:21.595 } 00:11:21.595 ], 00:11:21.595 "allow_any_host": true, 00:11:21.595 "hosts": [] 00:11:21.595 }, 00:11:21.595 { 00:11:21.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.595 "subtype": "NVMe", 00:11:21.595 "listen_addresses": [ 00:11:21.595 { 00:11:21.595 "trtype": "TCP", 00:11:21.595 "adrfam": "IPv4", 00:11:21.595 "traddr": "10.0.0.2", 00:11:21.595 "trsvcid": "4420" 00:11:21.595 } 00:11:21.595 ], 00:11:21.595 "allow_any_host": true, 00:11:21.595 "hosts": [], 00:11:21.595 "serial_number": "SPDK00000000000001", 00:11:21.595 "model_number": "SPDK bdev Controller", 00:11:21.595 "max_namespaces": 32, 00:11:21.595 "min_cntlid": 1, 00:11:21.595 "max_cntlid": 65519, 00:11:21.595 "namespaces": [ 00:11:21.595 { 00:11:21.595 "nsid": 1, 00:11:21.595 "bdev_name": "Null1", 00:11:21.595 "name": "Null1", 00:11:21.595 "nguid": "10CE436793A2410A9EAAB76BB0EB6744", 00:11:21.595 "uuid": "10ce4367-93a2-410a-9eaa-b76bb0eb6744" 00:11:21.595 } 00:11:21.595 ] 00:11:21.595 }, 00:11:21.595 { 00:11:21.595 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:21.595 "subtype": "NVMe", 00:11:21.595 "listen_addresses": [ 00:11:21.595 { 00:11:21.595 "trtype": "TCP", 00:11:21.595 "adrfam": "IPv4", 00:11:21.595 "traddr": "10.0.0.2", 00:11:21.595 "trsvcid": "4420" 00:11:21.595 } 00:11:21.595 ], 00:11:21.595 "allow_any_host": true, 00:11:21.595 "hosts": [], 00:11:21.595 "serial_number": "SPDK00000000000002", 00:11:21.595 "model_number": "SPDK bdev Controller", 00:11:21.595 "max_namespaces": 32, 00:11:21.595 "min_cntlid": 1, 00:11:21.595 "max_cntlid": 65519, 00:11:21.595 "namespaces": [ 00:11:21.595 { 00:11:21.595 "nsid": 1, 00:11:21.595 "bdev_name": "Null2", 00:11:21.595 "name": "Null2", 00:11:21.595 "nguid": "E1DBDFD88C864058AD54283255782951", 00:11:21.595 "uuid": "e1dbdfd8-8c86-4058-ad54-283255782951" 00:11:21.595 } 00:11:21.595 ] 00:11:21.595 }, 00:11:21.595 { 00:11:21.595 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:21.595 "subtype": "NVMe", 00:11:21.595 "listen_addresses": [ 00:11:21.595 { 00:11:21.595 "trtype": "TCP", 00:11:21.595 "adrfam": "IPv4", 00:11:21.595 "traddr": "10.0.0.2", 
00:11:21.595 "trsvcid": "4420" 00:11:21.595 } 00:11:21.595 ], 00:11:21.595 "allow_any_host": true, 00:11:21.595 "hosts": [], 00:11:21.595 "serial_number": "SPDK00000000000003", 00:11:21.595 "model_number": "SPDK bdev Controller", 00:11:21.595 "max_namespaces": 32, 00:11:21.595 "min_cntlid": 1, 00:11:21.595 "max_cntlid": 65519, 00:11:21.595 "namespaces": [ 00:11:21.595 { 00:11:21.595 "nsid": 1, 00:11:21.595 "bdev_name": "Null3", 00:11:21.595 "name": "Null3", 00:11:21.595 "nguid": "D3A3A85093CE49DB8EA6AC3096A8D023", 00:11:21.595 "uuid": "d3a3a850-93ce-49db-8ea6-ac3096a8d023" 00:11:21.595 } 00:11:21.595 ] 00:11:21.595 }, 00:11:21.595 { 00:11:21.595 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:21.595 "subtype": "NVMe", 00:11:21.595 "listen_addresses": [ 00:11:21.595 { 00:11:21.595 "trtype": "TCP", 00:11:21.595 "adrfam": "IPv4", 00:11:21.595 "traddr": "10.0.0.2", 00:11:21.595 "trsvcid": "4420" 00:11:21.595 } 00:11:21.595 ], 00:11:21.595 "allow_any_host": true, 00:11:21.595 "hosts": [], 00:11:21.595 "serial_number": "SPDK00000000000004", 00:11:21.595 "model_number": "SPDK bdev Controller", 00:11:21.595 "max_namespaces": 32, 00:11:21.595 "min_cntlid": 1, 00:11:21.595 "max_cntlid": 65519, 00:11:21.595 "namespaces": [ 00:11:21.595 { 00:11:21.595 "nsid": 1, 00:11:21.595 "bdev_name": "Null4", 00:11:21.595 "name": "Null4", 00:11:21.595 "nguid": "C1BE275BCAAD4E4CB9B0F0D67D289D35", 00:11:21.595 "uuid": "c1be275b-caad-4e4c-b9b0-f0d67d289d35" 00:11:21.595 } 00:11:21.595 ] 00:11:21.595 } 00:11:21.595 ] 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:21.595 11:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.595 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.595 rmmod nvme_tcp 00:11:21.595 rmmod nvme_fabrics 00:11:21.595 rmmod nvme_keyring 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 868113 ']' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 868113 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 868113 ']' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 868113 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 868113 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 868113' 00:11:21.857 killing process with pid 868113 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 868113 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 868113 00:11:21.857 11:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.857 11:55:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.406 00:11:24.406 real 0m11.725s 00:11:24.406 user 0m8.918s 00:11:24.406 sys 0m6.174s 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.406 ************************************ 00:11:24.406 END TEST nvmf_target_discovery 00:11:24.406 ************************************ 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.406 ************************************ 00:11:24.406 START TEST nvmf_referrals 00:11:24.406 ************************************ 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:24.406 * Looking for test storage... 
00:11:24.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:24.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.406 --rc genhtml_branch_coverage=1 00:11:24.406 --rc genhtml_function_coverage=1 00:11:24.406 --rc genhtml_legend=1 00:11:24.406 --rc geninfo_all_blocks=1 00:11:24.406 --rc geninfo_unexecuted_blocks=1 00:11:24.406 00:11:24.406 ' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:24.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.406 --rc genhtml_branch_coverage=1 00:11:24.406 --rc genhtml_function_coverage=1 00:11:24.406 --rc genhtml_legend=1 00:11:24.406 --rc geninfo_all_blocks=1 00:11:24.406 --rc geninfo_unexecuted_blocks=1 00:11:24.406 00:11:24.406 ' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:24.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.406 --rc genhtml_branch_coverage=1 00:11:24.406 --rc genhtml_function_coverage=1 00:11:24.406 --rc genhtml_legend=1 00:11:24.406 --rc geninfo_all_blocks=1 00:11:24.406 --rc geninfo_unexecuted_blocks=1 00:11:24.406 00:11:24.406 ' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:24.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.406 --rc genhtml_branch_coverage=1 00:11:24.406 --rc genhtml_function_coverage=1 00:11:24.406 --rc genhtml_legend=1 00:11:24.406 --rc geninfo_all_blocks=1 00:11:24.406 --rc geninfo_unexecuted_blocks=1 00:11:24.406 00:11:24.406 ' 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.406 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.407 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:32.554 11:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.554 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:32.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:32.555 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:32.555 
11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:32.555 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:32.555 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:32.555 11:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:32.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:11:32.555 00:11:32.555 --- 10.0.0.2 ping statistics --- 00:11:32.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.555 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:11:32.555 00:11:32.555 --- 10.0.0.1 ping statistics --- 00:11:32.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.555 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=872794 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 872794 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 872794 ']' 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
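Stripped of the xtrace noise, the nvmftestinit sequence traced above is standard iproute2 plumbing: one e810 port (cvl_0_0) moves into a namespace as the target side, its sibling (cvl_0_1) stays in the host as the initiator, and an iptables exception plus pings validate the 10.0.0.1 <-> 10.0.0.2 path. A hedged condensation, with commands as they appear in the trace (run as root; the interface names come from the harness, and the address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1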
00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.555 11:56:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.555 [2024-10-21 11:56:08.458175] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:11:32.555 [2024-10-21 11:56:08.458237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.555 [2024-10-21 11:56:08.549100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.556 [2024-10-21 11:56:08.602307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.556 [2024-10-21 11:56:08.602370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.556 [2024-10-21 11:56:08.602379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.556 [2024-10-21 11:56:08.602387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.556 [2024-10-21 11:56:08.602393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.556 [2024-10-21 11:56:08.604377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.556 [2024-10-21 11:56:08.604460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.556 [2024-10-21 11:56:08.604626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.556 [2024-10-21 11:56:08.604625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.818 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.818 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:32.818 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:32.818 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.819 [2024-10-21 11:56:09.336137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
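The target is then started inside that namespace and given a TCP transport plus a discovery listener, which is what the nvmfappstart and rpc_cmd calls above do. The harness's rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; the direct invocations below are an assumed equivalent, not the verbatim helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ... block until the app listens on /var/tmp/spdk.sock (waitforlisten) ...
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

Port 8009 is the conventional NVMe-oF discovery port; 4420, opened in the firewall earlier, is the conventional I/O port.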
00:11:32.819 [2024-10-21 11:56:09.352533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:32.819 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.081 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:33.343 11:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.343 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.605 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:33.605 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:33.605 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:33.605 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.605 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.605 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.866 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.128 11:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.389 11:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.651 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.911 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
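The referral exercise that finishes here reduces to an add/verify/remove cycle checked from both ends: the target's RPC view (nvmf_discovery_get_referrals) and an initiator-side nvme discover against the discovery listener, whose JSON is filtered and sorted so the two lists can be string-compared. A sketch of the core loop, with HOSTNQN/HOSTID standing in for this rig's nqn.2014-08.org.nvmexpress:uuid value:

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length     # expect 3
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length     # expect 0

The second half of the test repeats the cycle with subsystem-qualified referrals (-n discovery and -n nqn.2016-06.io.spdk:cnode1) to check that the referred-to NQN, not just the address, round-trips through the discovery log page.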
00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.173 rmmod nvme_tcp 00:11:35.173 rmmod nvme_fabrics 00:11:35.173 rmmod nvme_keyring 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 872794 ']' 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 872794 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 872794 ']' 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 872794 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 872794 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872794' 00:11:35.173 killing process with pid 872794 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 872794 00:11:35.173 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 872794 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.434 11:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.349 00:11:37.349 real 0m13.254s 00:11:37.349 user 0m15.908s 00:11:37.349 sys 0m6.471s 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.349 ************************************ 00:11:37.349 END TEST nvmf_referrals 00:11:37.349 ************************************ 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.349 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.349 ************************************ 00:11:37.349 START TEST nvmf_connect_disconnect 00:11:37.349 ************************************ 00:11:37.611 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:37.611 * Looking for test storage... 00:11:37.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.611 --rc genhtml_branch_coverage=1 00:11:37.611 --rc genhtml_function_coverage=1 00:11:37.611 --rc genhtml_legend=1 00:11:37.611 --rc geninfo_all_blocks=1 00:11:37.611 --rc geninfo_unexecuted_blocks=1 00:11:37.611 00:11:37.611 ' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.611 --rc genhtml_branch_coverage=1 00:11:37.611 --rc genhtml_function_coverage=1 00:11:37.611 --rc genhtml_legend=1 00:11:37.611 --rc geninfo_all_blocks=1 00:11:37.611 --rc geninfo_unexecuted_blocks=1 00:11:37.611 00:11:37.611 ' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.611 --rc genhtml_branch_coverage=1 00:11:37.611 --rc genhtml_function_coverage=1 00:11:37.611 --rc genhtml_legend=1 00:11:37.611 --rc geninfo_all_blocks=1 00:11:37.611 --rc geninfo_unexecuted_blocks=1 00:11:37.611 00:11:37.611 ' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.611 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.611 --rc genhtml_branch_coverage=1 00:11:37.611 --rc genhtml_function_coverage=1 00:11:37.611 --rc genhtml_legend=1 00:11:37.611 --rc geninfo_all_blocks=1 00:11:37.611 --rc geninfo_unexecuted_blocks=1 00:11:37.611 00:11:37.611 ' 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.611 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.612 11:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.612 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.755 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.756 
11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:45.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.756 
11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:45.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:45.756 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
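Device selection in nvmftestinit is sysfs-driven: each supported PCI function is mapped to its bound net device by globbing the device's net/ directory, as the enumeration above shows for 0000:4b:00.0 (the second port, 0000:4b:00.1, is matched the same way just below). A reconstructed sketch of that mapping step:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the bound netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

With two usable ports, the first (cvl_0_0) becomes the target interface and the second (cvl_0_1) the initiator, which is why the netns bring-up that follows is identical to the one in the referrals run.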
00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:45.756 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:11:45.756 00:11:45.756 --- 10.0.0.2 ping statistics --- 00:11:45.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.756 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:11:45.756 00:11:45.756 --- 10.0.0.1 ping statistics --- 00:11:45.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.756 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:45.756 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=877593 00:11:45.757 11:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 877593 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 877593 ']' 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.757 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.757 [2024-10-21 11:56:21.806669] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:11:45.757 [2024-10-21 11:56:21.806740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.757 [2024-10-21 11:56:21.896182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.757 [2024-10-21 11:56:21.951337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.757 [2024-10-21 11:56:21.951385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.757 [2024-10-21 11:56:21.951395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.757 [2024-10-21 11:56:21.951402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.757 [2024-10-21 11:56:21.951409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
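The waitforlisten step above reduces to polling the target's UNIX-domain RPC socket until the app answers; the -m 0xF core mask handed to nvmf_tgt is also why four reactors come up on cores 0-3 just below. A minimal sketch of the polling pattern, assuming the default /var/tmp/spdk.sock socket and SPDK's scripts/rpc.py (the retry budget is an illustrative choice, not the harness's):

  rpc_sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is a cheap call that proves the app is up and listening
    if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
      break
    fi
    sleep 0.5
  done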
00:11:45.757 [2024-10-21 11:56:21.953722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.757 [2024-10-21 11:56:21.953756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.757 [2024-10-21 11:56:21.953889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.757 [2024-10-21 11:56:21.953889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 [2024-10-21 11:56:22.682083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 11:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 [2024-10-21 11:56:22.761927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:46.330 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:50.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.747 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:04.748 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:04.748 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:04.748 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.748 rmmod nvme_tcp 00:12:04.748 rmmod nvme_fabrics 00:12:04.748 rmmod nvme_keyring 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 877593 ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 877593 ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
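Condensed, the connect/disconnect test body logged above is five RPCs plus a loop. A sketch under the assumption that rpc.py reaches the target started earlier (the nvme-cli flags mirror what produced the 'disconnected 1 controller(s)' lines; exact paths differ in the harness, which drives the same RPCs through rpc_cmd):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512   # 64 MiB ramdisk bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in {1..5}; do   # num_iterations=5 in the log
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done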
00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 877593' 00:12:04.748 killing process with pid 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 877593 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.748 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.297 00:12:07.297 real 0m29.392s 00:12:07.297 user 1m19.067s 00:12:07.297 sys 0m7.146s 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.297 ************************************ 00:12:07.297 END TEST nvmf_connect_disconnect 00:12:07.297 ************************************ 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:07.297 ************************************ 00:12:07.297 START TEST nvmf_multitarget 00:12:07.297 ************************************ 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:07.297 * Looking for test storage... 00:12:07.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.297 --rc genhtml_branch_coverage=1 00:12:07.297 --rc genhtml_function_coverage=1 00:12:07.297 --rc genhtml_legend=1 00:12:07.297 --rc geninfo_all_blocks=1 00:12:07.297 --rc geninfo_unexecuted_blocks=1 00:12:07.297 00:12:07.297 ' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.297 --rc genhtml_branch_coverage=1 00:12:07.297 --rc genhtml_function_coverage=1 00:12:07.297 --rc genhtml_legend=1 00:12:07.297 --rc geninfo_all_blocks=1 00:12:07.297 --rc geninfo_unexecuted_blocks=1 00:12:07.297 00:12:07.297 ' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.297 --rc genhtml_branch_coverage=1 00:12:07.297 --rc genhtml_function_coverage=1 00:12:07.297 --rc genhtml_legend=1 00:12:07.297 --rc geninfo_all_blocks=1 00:12:07.297 --rc geninfo_unexecuted_blocks=1 00:12:07.297 00:12:07.297 ' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.297 --rc genhtml_branch_coverage=1 00:12:07.297 --rc genhtml_function_coverage=1 00:12:07.297 --rc genhtml_legend=1 00:12:07.297 --rc geninfo_all_blocks=1 00:12:07.297 --rc geninfo_unexecuted_blocks=1 00:12:07.297 00:12:07.297 ' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.297 11:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.297 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:07.298 11:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.298 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:15.447 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:15.447 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:15.447 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:15.447 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.447 11:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:12:15.447 00:12:15.447 --- 10.0.0.2 ping statistics --- 00:12:15.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.447 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:15.447 00:12:15.447 --- 10.0.0.1 ping statistics --- 00:12:15.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.447 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:15.447 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=885722 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 885722 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 885722 ']' 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.448 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.448 [2024-10-21 11:56:51.264112] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
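The ipts wrapper a few lines up tags its ACCEPT rule with an SPDK_NVMF comment so that teardown can strip every rule the tests added without touching the rest of the firewall, which is exactly what the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence at the end of each test does. In plain iptables terms (interface and port as in this run):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: rebuild the ruleset minus anything tagged SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore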
00:12:15.448 [2024-10-21 11:56:51.264180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.448 [2024-10-21 11:56:51.353988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.448 [2024-10-21 11:56:51.408180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.448 [2024-10-21 11:56:51.408231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.448 [2024-10-21 11:56:51.408240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.448 [2024-10-21 11:56:51.408247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.448 [2024-10-21 11:56:51.408253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.448 [2024-10-21 11:56:51.410368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.448 [2024-10-21 11:56:51.410492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.448 [2024-10-21 11:56:51.410657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.448 [2024-10-21 11:56:51.410658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:15.710 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:15.972 "nvmf_tgt_1" 00:12:15.972 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:15.972 "nvmf_tgt_2" 00:12:15.972 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
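The multitarget checks above and just below reduce to counting targets with jq around each create/delete call (the pending nvmf_get_targets pipes into the jq length on the next line). A sketch of the whole cycle, assuming the multitarget_rpc.py path from the log; per this run the counts go 1, 3, 1:

  rpc_py=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default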
00:12:15.972 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:16.233 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:16.233 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:16.233 true 00:12:16.233 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:16.233 true 00:12:16.233 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.233 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.494 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.494 rmmod nvme_tcp 00:12:16.494 rmmod nvme_fabrics 00:12:16.494 rmmod nvme_keyring 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 885722 ']' 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 885722 00:12:16.494 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 885722 ']' 00:12:16.495 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 885722 00:12:16.495 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:16.495 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.495 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885722 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.756 11:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885722' 00:12:16.756 killing process with pid 885722 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 885722 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 885722 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.756 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.314 00:12:19.314 real 0m11.919s 00:12:19.314 user 0m10.400s 00:12:19.314 sys 0m6.189s 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.314 ************************************ 00:12:19.314 END TEST nvmf_multitarget 00:12:19.314 ************************************ 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.314 ************************************ 00:12:19.314 START TEST nvmf_rpc 00:12:19.314 ************************************ 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.314 * Looking for test storage... 
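Every test in this run is driven by the same run_test wrapper, which prints the START/END banners and the real/user/sys timing visible above. Roughly (a sketch only; the real helper in autotest_common.sh also validates arguments and propagates the exit code):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"; local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test nvmf_rpc test/nvmf/target/rpc.sh --transport=tcp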
00:12:19.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.314 --rc genhtml_branch_coverage=1 00:12:19.314 --rc genhtml_function_coverage=1 00:12:19.314 --rc genhtml_legend=1 00:12:19.314 --rc geninfo_all_blocks=1 00:12:19.314 --rc geninfo_unexecuted_blocks=1 00:12:19.314 00:12:19.314 ' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.314 --rc genhtml_branch_coverage=1 00:12:19.314 --rc genhtml_function_coverage=1 00:12:19.314 --rc genhtml_legend=1 00:12:19.314 --rc geninfo_all_blocks=1 00:12:19.314 --rc geninfo_unexecuted_blocks=1 00:12:19.314 00:12:19.314 ' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.314 --rc genhtml_branch_coverage=1 00:12:19.314 --rc genhtml_function_coverage=1 00:12:19.314 --rc genhtml_legend=1 00:12:19.314 --rc geninfo_all_blocks=1 00:12:19.314 --rc geninfo_unexecuted_blocks=1 00:12:19.314 00:12:19.314 ' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.314 --rc genhtml_branch_coverage=1 00:12:19.314 --rc genhtml_function_coverage=1 00:12:19.314 --rc genhtml_legend=1 00:12:19.314 --rc geninfo_all_blocks=1 00:12:19.314 --rc geninfo_unexecuted_blocks=1 00:12:19.314 00:12:19.314 ' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
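The scripts/common.sh trace just above replays a dotted-version compare: split both version strings on dots and dashes, then compare component by component as decimals, treating missing components as zero. A compact sketch of the same idea (illustrative only; the harness's cmp_versions also handles >, <=, >= and guards against non-numeric components):

  lt() {   # usage: lt 1.15 2  -> succeeds if $1 < $2
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov older than 2: add the --rc branch/function coverage flags"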
00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:19.314 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:19.315 11:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.315 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:27.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:27.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:27.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:27.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.488 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.488 11:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.489 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.489 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.489 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.489 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.489 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:12:27.489 00:12:27.489 --- 10.0.0.2 ping statistics --- 00:12:27.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.489 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:12:27.489 00:12:27.489 --- 10.0.0.1 ping statistics --- 00:12:27.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.489 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=890477 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 890477 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 890477 ']' 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.489 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 [2024-10-21 11:57:03.297872] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
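The preceding block, from gather_supported_nvmf_pci_devs through the nvmf_tgt launch, builds the physical TCP test topology: the two E810 ports (device ID 0x159b) are discovered under /sys/bus/pci/devices/*/net/, the first (cvl_0_0) is moved into a private network namespace to act as the target, and the second (cvl_0_1) stays in the root namespace as the initiator. A condensed replay of those commands, with device names and addresses copied verbatim from the trace:

  ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  ping -c 1 10.0.0.2                                # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator

This is also why nvmf_tgt is launched above under "ip netns exec cvl_0_0_ns_spdk": the target process must run where the namespaced port is visible.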
00:12:27.489 [2024-10-21 11:57:03.297935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.489 [2024-10-21 11:57:03.388977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.489 [2024-10-21 11:57:03.442374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.489 [2024-10-21 11:57:03.442428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.489 [2024-10-21 11:57:03.442437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.489 [2024-10-21 11:57:03.442444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.489 [2024-10-21 11:57:03.442450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.489 [2024-10-21 11:57:03.444777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.489 [2024-10-21 11:57:03.444942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.489 [2024-10-21 11:57:03.448353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.489 [2024-10-21 11:57:03.448497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.750 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.750 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:27.750 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:27.750 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.750 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:27.751 "tick_rate": 2400000000, 00:12:27.751 "poll_groups": [ 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_000", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_001", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_002", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 
"current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_003", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [] 00:12:27.751 } 00:12:27.751 ] 00:12:27.751 }' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 [2024-10-21 11:57:04.292616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:27.751 "tick_rate": 2400000000, 00:12:27.751 "poll_groups": [ 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_000", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [ 00:12:27.751 { 00:12:27.751 "trtype": "TCP" 00:12:27.751 } 00:12:27.751 ] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_001", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [ 00:12:27.751 { 00:12:27.751 "trtype": "TCP" 00:12:27.751 } 00:12:27.751 ] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_002", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [ 00:12:27.751 { 00:12:27.751 "trtype": "TCP" 
00:12:27.751 } 00:12:27.751 ] 00:12:27.751 }, 00:12:27.751 { 00:12:27.751 "name": "nvmf_tgt_poll_group_003", 00:12:27.751 "admin_qpairs": 0, 00:12:27.751 "io_qpairs": 0, 00:12:27.751 "current_admin_qpairs": 0, 00:12:27.751 "current_io_qpairs": 0, 00:12:27.751 "pending_bdev_io": 0, 00:12:27.751 "completed_nvme_io": 0, 00:12:27.751 "transports": [ 00:12:27.751 { 00:12:27.751 "trtype": "TCP" 00:12:27.751 } 00:12:27.751 ] 00:12:27.751 } 00:12:27.751 ] 00:12:27.751 }' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:27.751 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 Malloc1 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 [2024-10-21 11:57:04.501378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.012 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:28.013 [2024-10-21 11:57:04.538270] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:28.013 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:28.013 could not add new controller: failed to write to nvme-fabrics device 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:28.013 11:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.013 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.925 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.925 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.925 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.925 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.925 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.837 [2024-10-21 11:57:08.241307] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:31.837 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.837 could not add new controller: failed to write to nvme-fabrics device 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.837 
11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.837 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.223 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.223 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.223 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.223 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.223 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.771 
11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 [2024-10-21 11:57:11.966889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.771 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.772 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.772 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.772 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.772 11:57:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.156 11:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.156 11:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.156 11:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.156 11:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:37.156 11:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.070 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.331 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.331 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.331 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.331 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.331 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.332 [2024-10-21 11:57:15.696452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.332 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.717 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.717 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.717 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.717 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:40.717 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 [2024-10-21 11:57:19.458800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.263 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.648 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.648 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.648 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.648 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.648 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.561 
11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.561 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
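The block above is one full pass of the connect/disconnect loop in target/rpc.sh (markers @81 through @94). Stripped of the xtrace plumbing, each iteration reduces to roughly the following; this is a paraphrase of the traced commands, not the verbatim script, and $loops, rpc_cmd, and the NVME_HOST array come from the surrounding test harness:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace ID 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME           # wait for the namespace to show up as a block device
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Every command in the sketch appears verbatim in the xtrace output around this point; only the loop framing is reconstructed.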
00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 [2024-10-21 11:57:23.223507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.822 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.207 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.207 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.207 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.207 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.207 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
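The waitforserial and waitforserial_disconnect polling that produces the repeated lsblk and grep entries above checks whether a block device carrying the subsystem serial has appeared, or is gone again after disconnect. A minimal sketch consistent with the traced commands; the real helpers live in common/autotest_common.sh and handle more cases than shown here:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2                                      # give the kernel time to create the device node
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 1
        done
        return 1
    }

    waitforserial_disconnect() {
        local i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
            (( i++ > 15 )) && return 1               # still present after the timeout
            sleep 1
        done
        return 0
    }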
00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 [2024-10-21 11:57:26.952775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.139 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.139 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.139 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.139 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:52.139 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:54.045 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:54.305 
11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.305 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 [2024-10-21 11:57:30.726766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 [2024-10-21 11:57:30.794909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 
11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 [2024-10-21 11:57:30.863098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.306 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 [2024-10-21 11:57:30.931324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 [2024-10-21 11:57:30.991523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:54.567 "tick_rate": 2400000000, 00:12:54.567 "poll_groups": [ 00:12:54.567 { 00:12:54.567 "name": "nvmf_tgt_poll_group_000", 00:12:54.567 "admin_qpairs": 0, 00:12:54.567 "io_qpairs": 224, 00:12:54.567 "current_admin_qpairs": 0, 00:12:54.567 "current_io_qpairs": 0, 00:12:54.567 "pending_bdev_io": 0, 00:12:54.567 "completed_nvme_io": 345, 00:12:54.567 "transports": [ 00:12:54.567 { 00:12:54.567 "trtype": "TCP" 00:12:54.567 } 00:12:54.567 ] 00:12:54.567 }, 00:12:54.567 { 00:12:54.567 "name": "nvmf_tgt_poll_group_001", 00:12:54.567 "admin_qpairs": 1, 00:12:54.567 "io_qpairs": 223, 00:12:54.567 "current_admin_qpairs": 0, 00:12:54.567 "current_io_qpairs": 0, 00:12:54.567 "pending_bdev_io": 0, 00:12:54.567 "completed_nvme_io": 300, 00:12:54.567 "transports": [ 00:12:54.567 { 00:12:54.567 "trtype": "TCP" 00:12:54.567 } 00:12:54.567 ] 00:12:54.567 }, 00:12:54.567 { 00:12:54.567 "name": "nvmf_tgt_poll_group_002", 00:12:54.567 "admin_qpairs": 6, 00:12:54.567 "io_qpairs": 218, 00:12:54.567 "current_admin_qpairs": 0, 00:12:54.567 "current_io_qpairs": 0, 00:12:54.567 "pending_bdev_io": 0, 00:12:54.567 "completed_nvme_io": 220, 00:12:54.567 "transports": [ 00:12:54.567 { 00:12:54.567 "trtype": "TCP" 00:12:54.567 } 00:12:54.567 ] 00:12:54.567 }, 00:12:54.567 { 00:12:54.567 "name": "nvmf_tgt_poll_group_003", 00:12:54.567 "admin_qpairs": 0, 00:12:54.567 "io_qpairs": 224, 00:12:54.567 "current_admin_qpairs": 0, 00:12:54.567 "current_io_qpairs": 0, 00:12:54.567 "pending_bdev_io": 0, 00:12:54.567 "completed_nvme_io": 374, 00:12:54.567 "transports": [ 00:12:54.567 { 00:12:54.567 "trtype": "TCP" 00:12:54.567 } 00:12:54.567 ] 00:12:54.567 } 00:12:54.567 ] 00:12:54.567 }' 00:12:54.567 11:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:54.567 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.827 rmmod nvme_tcp 00:12:54.827 rmmod nvme_fabrics 00:12:54.827 rmmod nvme_keyring 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 890477 ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 890477 ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890477' 
00:12:54.827 killing process with pid 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 890477 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.827 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.369 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.369 00:12:57.369 real 0m38.058s 00:12:57.369 user 1m53.716s 00:12:57.369 sys 0m8.039s 00:12:57.369 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.369 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.369 ************************************ 00:12:57.369 END TEST nvmf_rpc 00:12:57.369 ************************************ 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.370 ************************************ 00:12:57.370 START TEST nvmf_invalid 00:12:57.370 ************************************ 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.370 * Looking for test storage... 
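Just below, after the test-storage probe, scripts/common.sh runs its lcov version gate: lt 1.15 2 asks whether the installed lcov predates 2.x by splitting both versions on dots and comparing the fields pairwise (ver1_l and ver2_l in the trace are the field counts, op is '<'). A compact re-implementation of the same idea, reconstructed from the trace rather than copied from scripts/common.sh, whose cmp_versions is more general:

    lt() {    # succeed if dotted version $1 sorts before $2
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    lt 1.15 2 && echo 'pre-2.x lcov'    # succeeds here (1 < 2), matching the return 0 in the trace below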
00:12:57.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.370 --rc genhtml_branch_coverage=1 00:12:57.370 --rc genhtml_function_coverage=1 00:12:57.370 --rc genhtml_legend=1 00:12:57.370 --rc geninfo_all_blocks=1 00:12:57.370 --rc geninfo_unexecuted_blocks=1 00:12:57.370 00:12:57.370 ' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.370 --rc genhtml_branch_coverage=1 00:12:57.370 --rc genhtml_function_coverage=1 00:12:57.370 --rc genhtml_legend=1 00:12:57.370 --rc geninfo_all_blocks=1 00:12:57.370 --rc geninfo_unexecuted_blocks=1 00:12:57.370 00:12:57.370 ' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.370 --rc genhtml_branch_coverage=1 00:12:57.370 --rc genhtml_function_coverage=1 00:12:57.370 --rc genhtml_legend=1 00:12:57.370 --rc geninfo_all_blocks=1 00:12:57.370 --rc geninfo_unexecuted_blocks=1 00:12:57.370 00:12:57.370 ' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.370 --rc genhtml_branch_coverage=1 00:12:57.370 --rc genhtml_function_coverage=1 00:12:57.370 --rc genhtml_legend=1 00:12:57.370 --rc geninfo_all_blocks=1 00:12:57.370 --rc geninfo_unexecuted_blocks=1 00:12:57.370 00:12:57.370 ' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:57.370 11:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.370 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.371 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.517 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.517 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:05.517 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:05.517 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:05.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:05.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:05.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:05.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
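
[annotation] The discovery pass above walks a table of known vendor:device IDs (e810/x722 for Intel, several mlx5 IDs for Mellanox), keeps the PCI functions that match, and then reads the kernel's view of which net device sits on each function out of sysfs. A condensed sketch of that lookup, assuming the e810 device ID seen in this log (8086:159b); this is not the literal common.sh code:

    # enumerate e810 functions, then list the netdev(s) bound to each one
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
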
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.518 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:13:05.518 00:13:05.518 --- 10.0.0.2 ping statistics --- 00:13:05.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.518 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
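
[annotation] nvmf_tcp_init above builds the standard SPDK TCP test topology: the two ports of the one physical NIC are split between the root namespace (initiator side, cvl_0_1) and a private namespace (target side, cvl_0_0), addressed on the same /24, the NVMe/TCP port is opened in the firewall, and connectivity is verified in both directions. A condensed replay of the commands in this trace (error handling and the -m comment tag omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                              # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns
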
00:13:05.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:13:05.518 00:13:05.518 --- 10.0.0.1 ping statistics --- 00:13:05.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.518 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=900550 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 900550 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 900550 ']' 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.518 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.518 [2024-10-21 11:57:41.339130] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
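
[annotation] nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the JSON-RPC socket answers before the test proceeds. A rough equivalent of that wait loop, not the actual waitforlisten implementation (rpc_get_methods is a standard SPDK RPC; the retry count and sleep interval here are arbitrary):

    sock=/var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    for _ in $(seq 1 100); do
        # succeeds once the target is up and listening on the UNIX domain socket
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$pid" || exit 1    # give up if the process already died
        sleep 0.1
    done
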
00:13:05.518 [2024-10-21 11:57:41.339199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.518 [2024-10-21 11:57:41.429531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.519 [2024-10-21 11:57:41.483028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.519 [2024-10-21 11:57:41.483080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.519 [2024-10-21 11:57:41.483088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.519 [2024-10-21 11:57:41.483095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.519 [2024-10-21 11:57:41.483101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.519 [2024-10-21 11:57:41.485452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.519 [2024-10-21 11:57:41.485627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.519 [2024-10-21 11:57:41.485790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.519 [2024-10-21 11:57:41.485791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:05.779 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11113 00:13:06.041 [2024-10-21 11:57:42.386463] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:06.041 { 00:13:06.041 "nqn": "nqn.2016-06.io.spdk:cnode11113", 00:13:06.041 "tgt_name": "foobar", 00:13:06.041 "method": "nvmf_create_subsystem", 00:13:06.041 "req_id": 1 00:13:06.041 } 00:13:06.041 Got JSON-RPC error response 00:13:06.041 response: 00:13:06.041 { 00:13:06.041 "code": -32603, 00:13:06.041 "message": "Unable to find target foobar" 00:13:06.041 }' 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:06.041 { 00:13:06.041 "nqn": "nqn.2016-06.io.spdk:cnode11113", 00:13:06.041 "tgt_name": "foobar", 00:13:06.041 "method": "nvmf_create_subsystem", 00:13:06.041 "req_id": 1 00:13:06.041 } 00:13:06.041 Got JSON-RPC error response 00:13:06.041 
response: 00:13:06.041 { 00:13:06.041 "code": -32603, 00:13:06.041 "message": "Unable to find target foobar" 00:13:06.041 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32475 00:13:06.041 [2024-10-21 11:57:42.595313] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32475: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:06.041 { 00:13:06.041 "nqn": "nqn.2016-06.io.spdk:cnode32475", 00:13:06.041 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:06.041 "method": "nvmf_create_subsystem", 00:13:06.041 "req_id": 1 00:13:06.041 } 00:13:06.041 Got JSON-RPC error response 00:13:06.041 response: 00:13:06.041 { 00:13:06.041 "code": -32602, 00:13:06.041 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:06.041 }' 00:13:06.041 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:06.041 { 00:13:06.041 "nqn": "nqn.2016-06.io.spdk:cnode32475", 00:13:06.041 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:06.041 "method": "nvmf_create_subsystem", 00:13:06.041 "req_id": 1 00:13:06.041 } 00:13:06.041 Got JSON-RPC error response 00:13:06.041 response: 00:13:06.041 { 00:13:06.041 "code": -32602, 00:13:06.041 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:06.041 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17037 00:13:06.301 [2024-10-21 11:57:42.804061] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17037: invalid model number 'SPDK_Controller' 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:06.301 { 00:13:06.301 "nqn": "nqn.2016-06.io.spdk:cnode17037", 00:13:06.301 "model_number": "SPDK_Controller\u001f", 00:13:06.301 "method": "nvmf_create_subsystem", 00:13:06.301 "req_id": 1 00:13:06.301 } 00:13:06.301 Got JSON-RPC error response 00:13:06.301 response: 00:13:06.301 { 00:13:06.301 "code": -32602, 00:13:06.301 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.301 }' 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:06.301 { 00:13:06.301 "nqn": "nqn.2016-06.io.spdk:cnode17037", 00:13:06.301 "model_number": "SPDK_Controller\u001f", 00:13:06.301 "method": "nvmf_create_subsystem", 00:13:06.301 "req_id": 1 00:13:06.301 } 00:13:06.301 Got JSON-RPC error response 00:13:06.301 response: 00:13:06.301 { 00:13:06.301 "code": -32602, 00:13:06.301 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.301 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:06.301 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:06.302 11:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' ... '126' '127')
[gen_random_s trace condensed: chars holds the 96 ASCII codes 32 through 127 (full list elided); local chars and string are declared, then 21 loop passes follow, each picking one code, converting it with printf %x and echo -e '\xNN', and appending the character via string+=]
00:13:06.564 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]]
00:13:06.564 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@,$/@{;[&%)wF)c{EX6\'
00:13:06.564 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '@,$/@{;[&%)wF)c{EX6\' nqn.2016-06.io.spdk:cnode30024
00:13:06.826 [2024-10-21 11:57:43.177499] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30024: invalid serial number '@,$/@{;[&%)wF)c{EX6\'
00:13:06.826 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:13:06.826 {
00:13:06.826 "nqn": "nqn.2016-06.io.spdk:cnode30024",
00:13:06.826 "serial_number": "@,$/@{;[&%)wF)c{EX6\u007f\\",
00:13:06.826 "method": "nvmf_create_subsystem",
00:13:06.826 "req_id": 1
00:13:06.826 }
00:13:06.826 Got JSON-RPC error response
00:13:06.826 response:
00:13:06.826 {
00:13:06.826 "code": -32602,
00:13:06.826 "message": "Invalid SN @,$/@{;[&%)wF)c{EX6\u007f\\"
00:13:06.826 }'
[target/invalid.sh@55 comparison elided: the captured response is matched against *\I\n\v\a\l\i\d\ \S\N*, repeating the JSON above verbatim]
00:13:06.826 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:06.826 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:13:06.826 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' ... '126' '127')
[41 loop passes elided: the same printf %x / echo -e / string+= sequence, assembling the 41-character model number]
00:13:07.089 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]]
00:13:07.089 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lvX?TYl0G2uemUm'\''c6R+O:N>I0L:yP!,9h%'\''[D,WY'
00:13:07.089 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'lvX?TYl0G2uemUm'\''c6R+O:N>I0L:yP!,9h%'\''[D,WY' nqn.2016-06.io.spdk:cnode31745
00:13:07.349 [2024-10-21 11:57:43.719448] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31745: invalid model number 'lvX?TYl0G2uemUm'c6R+O:N>I0L:yP!,9h%'[D,WY'
00:13:07.349 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:07.349 {
00:13:07.349 "nqn": "nqn.2016-06.io.spdk:cnode31745",
00:13:07.349 "model_number": "lvX?TYl0G2uemUm'\''c6R+O:N>I0L:yP!,9h%'\''[D,WY",
00:13:07.349 "method": "nvmf_create_subsystem",
00:13:07.349 "req_id": 1
00:13:07.349 }
00:13:07.349 Got JSON-RPC error response
00:13:07.349 response:
00:13:07.349 {
00:13:07.349 "code": -32602,
00:13:07.349 "message": "Invalid MN lvX?TYl0G2uemUm'\''c6R+O:N>I0L:yP!,9h%'\''[D,WY"
00:13:07.349 }'
00:13:07.349 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:07.349 {
00:13:07.350 "nqn": "nqn.2016-06.io.spdk:cnode31745",
00:13:07.350 "model_number": "lvX?TYl0G2uemUm'c6R+O:N>I0L:yP!,9h%'[D,WY",
00:13:07.350 "method": "nvmf_create_subsystem",
00:13:07.350 "req_id": 1
00:13:07.350 }
00:13:07.350 Got JSON-RPC error response
00:13:07.350 response:
00:13:07.350 { 00:13:07.350 "code": -32602, 00:13:07.350 "message": "Invalid MN lvX?TYl0G2uemUm'c6R+O:N>I0L:yP!,9h%'[D,WY" 00:13:07.350 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:07.350 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:07.350 [2024-10-21 11:57:43.888065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.350 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:07.616 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:07.616 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:07.616 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:07.616 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:07.616 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:07.940 [2024-10-21 11:57:44.270613] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:07.940 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:07.940 { 00:13:07.940 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:07.940 "listen_address": { 00:13:07.940 "trtype": "tcp", 00:13:07.940 "traddr": "", 00:13:07.940 "trsvcid": "4421" 00:13:07.940 }, 00:13:07.940 "method": "nvmf_subsystem_remove_listener", 00:13:07.940 "req_id": 1 00:13:07.940 } 00:13:07.940 Got JSON-RPC error response 00:13:07.940 response: 00:13:07.940 { 00:13:07.940 "code": -32602, 00:13:07.940 "message": "Invalid parameters" 00:13:07.940 }' 00:13:07.940 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:07.940 { 00:13:07.940 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:07.940 "listen_address": { 00:13:07.940 "trtype": "tcp", 00:13:07.940 "traddr": "", 00:13:07.940 "trsvcid": "4421" 00:13:07.940 }, 00:13:07.940 "method": "nvmf_subsystem_remove_listener", 00:13:07.940 "req_id": 1 00:13:07.940 } 00:13:07.940 Got JSON-RPC error response 00:13:07.940 response: 00:13:07.940 { 00:13:07.940 "code": -32602, 00:13:07.940 "message": "Invalid parameters" 00:13:07.940 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:07.940 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28331 -i 0 00:13:07.940 [2024-10-21 11:57:44.459144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28331: invalid cntlid range [0-65519] 00:13:07.940 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:07.940 { 00:13:07.940 "nqn": "nqn.2016-06.io.spdk:cnode28331", 00:13:07.940 "min_cntlid": 0, 00:13:07.940 "method": "nvmf_create_subsystem", 00:13:07.940 "req_id": 1 00:13:07.940 } 00:13:07.940 Got JSON-RPC error response 00:13:07.940 response: 00:13:07.940 { 00:13:07.940 "code": -32602, 00:13:07.940 "message": "Invalid cntlid range [0-65519]" 00:13:07.940 }' 00:13:07.940 11:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:07.940 { 00:13:07.940 "nqn": "nqn.2016-06.io.spdk:cnode28331", 00:13:07.940 "min_cntlid": 0, 00:13:07.940 "method": "nvmf_create_subsystem", 00:13:07.940 "req_id": 1 00:13:07.940 } 00:13:07.940 Got JSON-RPC error response 00:13:07.940 response: 00:13:07.940 { 00:13:07.940 "code": -32602, 00:13:07.940 "message": "Invalid cntlid range [0-65519]" 00:13:07.940 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:07.940 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25250 -i 65520 00:13:08.235 [2024-10-21 11:57:44.643738] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25250: invalid cntlid range [65520-65519] 00:13:08.235 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:08.235 { 00:13:08.235 "nqn": "nqn.2016-06.io.spdk:cnode25250", 00:13:08.235 "min_cntlid": 65520, 00:13:08.235 "method": "nvmf_create_subsystem", 00:13:08.235 "req_id": 1 00:13:08.235 } 00:13:08.235 Got JSON-RPC error response 00:13:08.235 response: 00:13:08.235 { 00:13:08.235 "code": -32602, 00:13:08.235 "message": "Invalid cntlid range [65520-65519]" 00:13:08.235 }' 00:13:08.235 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:08.235 { 00:13:08.235 "nqn": "nqn.2016-06.io.spdk:cnode25250", 00:13:08.235 "min_cntlid": 65520, 00:13:08.235 "method": "nvmf_create_subsystem", 00:13:08.235 "req_id": 1 00:13:08.235 } 00:13:08.235 Got JSON-RPC error response 00:13:08.235 response: 00:13:08.235 { 00:13:08.235 "code": -32602, 00:13:08.235 "message": "Invalid cntlid range [65520-65519]" 00:13:08.235 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.235 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3328 -I 0 00:13:08.504 [2024-10-21 11:57:44.832252] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3328: invalid cntlid range [1-0] 00:13:08.504 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:08.504 { 00:13:08.504 "nqn": "nqn.2016-06.io.spdk:cnode3328", 00:13:08.504 "max_cntlid": 0, 00:13:08.504 "method": "nvmf_create_subsystem", 00:13:08.504 "req_id": 1 00:13:08.504 } 00:13:08.504 Got JSON-RPC error response 00:13:08.504 response: 00:13:08.504 { 00:13:08.504 "code": -32602, 00:13:08.504 "message": "Invalid cntlid range [1-0]" 00:13:08.504 }' 00:13:08.504 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:08.504 { 00:13:08.504 "nqn": "nqn.2016-06.io.spdk:cnode3328", 00:13:08.504 "max_cntlid": 0, 00:13:08.504 "method": "nvmf_create_subsystem", 00:13:08.504 "req_id": 1 00:13:08.504 } 00:13:08.504 Got JSON-RPC error response 00:13:08.504 response: 00:13:08.504 { 00:13:08.504 "code": -32602, 00:13:08.504 "message": "Invalid cntlid range [1-0]" 00:13:08.504 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.504 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28349 -I 65520 00:13:08.504 [2024-10-21 11:57:45.020839] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28349: invalid cntlid range [1-65520] 00:13:08.504 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:08.504 { 00:13:08.504 "nqn": "nqn.2016-06.io.spdk:cnode28349", 00:13:08.504 "max_cntlid": 65520, 00:13:08.504 "method": "nvmf_create_subsystem", 00:13:08.504 "req_id": 1 00:13:08.504 } 00:13:08.504 Got JSON-RPC error response 00:13:08.504 response: 00:13:08.504 { 00:13:08.504 "code": -32602, 00:13:08.504 "message": "Invalid cntlid range [1-65520]" 00:13:08.504 }' 00:13:08.504 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:08.504 { 00:13:08.504 "nqn": "nqn.2016-06.io.spdk:cnode28349", 00:13:08.504 "max_cntlid": 65520, 00:13:08.504 "method": "nvmf_create_subsystem", 00:13:08.504 "req_id": 1 00:13:08.504 } 00:13:08.504 Got JSON-RPC error response 00:13:08.504 response: 00:13:08.504 { 00:13:08.504 "code": -32602, 00:13:08.504 "message": "Invalid cntlid range [1-65520]" 00:13:08.504 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.504 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2533 -i 6 -I 5 00:13:08.765 [2024-10-21 11:57:45.209439] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2533: invalid cntlid range [6-5] 00:13:08.765 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:08.765 { 00:13:08.765 "nqn": "nqn.2016-06.io.spdk:cnode2533", 00:13:08.765 "min_cntlid": 6, 00:13:08.765 "max_cntlid": 5, 00:13:08.765 "method": "nvmf_create_subsystem", 00:13:08.765 "req_id": 1 00:13:08.765 } 00:13:08.765 Got JSON-RPC error response 00:13:08.765 response: 00:13:08.765 { 00:13:08.765 "code": -32602, 00:13:08.765 "message": "Invalid cntlid range [6-5]" 00:13:08.765 }' 00:13:08.765 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:08.765 { 00:13:08.765 "nqn": "nqn.2016-06.io.spdk:cnode2533", 00:13:08.765 "min_cntlid": 6, 00:13:08.765 "max_cntlid": 5, 00:13:08.765 "method": "nvmf_create_subsystem", 00:13:08.765 "req_id": 1 00:13:08.765 } 00:13:08.765 Got JSON-RPC error response 00:13:08.765 response: 00:13:08.765 { 00:13:08.765 "code": -32602, 00:13:08.765 "message": "Invalid cntlid range [6-5]" 00:13:08.765 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.765 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:08.765 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:08.765 { 00:13:08.765 "name": "foobar", 00:13:08.765 "method": "nvmf_delete_target", 00:13:08.765 "req_id": 1 00:13:08.765 } 00:13:08.765 Got JSON-RPC error response 00:13:08.765 response: 00:13:08.765 { 00:13:08.765 "code": -32602, 00:13:08.766 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:08.766 }' 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:08.766 { 00:13:08.766 "name": "foobar", 00:13:08.766 "method": "nvmf_delete_target", 00:13:08.766 "req_id": 1 00:13:08.766 } 00:13:08.766 Got JSON-RPC error response 00:13:08.766 response: 00:13:08.766 { 00:13:08.766 "code": -32602, 00:13:08.766 "message": "The specified target doesn't exist, cannot delete it." 00:13:08.766 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.766 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.766 rmmod nvme_tcp 00:13:09.027 rmmod nvme_fabrics 00:13:09.027 rmmod nvme_keyring 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 900550 ']' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 900550 ']' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900550' 00:13:09.027 killing process with pid 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 900550 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.027 11:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.575 00:13:11.575 real 0m14.108s 00:13:11.575 user 0m20.986s 00:13:11.575 sys 0m6.739s 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.575 ************************************ 00:13:11.575 END TEST nvmf_invalid 00:13:11.575 ************************************ 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.575 ************************************ 00:13:11.575 START TEST nvmf_connect_stress 00:13:11.575 ************************************ 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:11.575 * Looking for test storage... 
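The nvmf_invalid suite that just finished above probes the target's JSON-RPC argument validation: it feeds deliberately malformed input (a 41-character model number, cntlid ranges [0-65519], [65520-65519], [1-0], [1-65520], [6-5], and a nonexistent target name) and asserts on the error text that comes back. A minimal sketch of that negative-path pattern, assuming the rpc.py path shown in this log; the cnode number below is hypothetical:

    # Expect nvmf_create_subsystem to reject min_cntlid 0, as invalid.sh does above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
    [[ $out == *'Invalid cntlid range'* ]] && echo 'rejected as expected'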
00:13:11.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:11.575 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.576 --rc genhtml_branch_coverage=1 00:13:11.576 --rc genhtml_function_coverage=1 00:13:11.576 --rc genhtml_legend=1 00:13:11.576 --rc geninfo_all_blocks=1 00:13:11.576 --rc geninfo_unexecuted_blocks=1 00:13:11.576 00:13:11.576 ' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.576 --rc genhtml_branch_coverage=1 00:13:11.576 --rc genhtml_function_coverage=1 00:13:11.576 --rc genhtml_legend=1 00:13:11.576 --rc geninfo_all_blocks=1 00:13:11.576 --rc geninfo_unexecuted_blocks=1 00:13:11.576 00:13:11.576 ' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.576 --rc genhtml_branch_coverage=1 00:13:11.576 --rc genhtml_function_coverage=1 00:13:11.576 --rc genhtml_legend=1 00:13:11.576 --rc geninfo_all_blocks=1 00:13:11.576 --rc geninfo_unexecuted_blocks=1 00:13:11.576 00:13:11.576 ' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.576 --rc genhtml_branch_coverage=1 00:13:11.576 --rc genhtml_function_coverage=1 00:13:11.576 --rc genhtml_legend=1 00:13:11.576 --rc geninfo_all_blocks=1 00:13:11.576 --rc geninfo_unexecuted_blocks=1 00:13:11.576 00:13:11.576 ' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:11.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.576 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.723 11:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:19.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:19.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:19.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:19.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:19.723 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:19.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:19.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms
00:13:19.724
00:13:19.724 --- 10.0.0.2 ping statistics ---
00:13:19.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.724 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:19.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:19.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:13:19.724
00:13:19.724 --- 10.0.0.1 ping statistics ---
00:13:19.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.724 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=905716
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 905716
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 905716 ']'
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:13:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.724 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.724 [2024-10-21 11:57:55.469763] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:19.724 [2024-10-21 11:57:55.469828] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.724 [2024-10-21 11:57:55.563026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.724 [2024-10-21 11:57:55.614496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.724 [2024-10-21 11:57:55.614545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.724 [2024-10-21 11:57:55.614555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.724 [2024-10-21 11:57:55.614562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.724 [2024-10-21 11:57:55.614568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.724 [2024-10-21 11:57:55.616420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.724 [2024-10-21 11:57:55.616772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.724 [2024-10-21 11:57:55.616773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.724 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.724 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:19.724 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:19.724 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.724 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 [2024-10-21 11:57:56.344188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
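The rpc_cmd calls starting here, together with the listener and null-bdev steps that appear a few entries below, bring up the stress target end to end: a TCP transport, subsystem cnode1 (serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev. A sketch of the same sequence as direct rpc.py calls against the /var/tmp/spdk.sock socket that waitforlisten polled above; every command and argument is taken verbatim from this log, only the $rpc shorthand is added:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192    # -u 8192: in-capsule data size
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks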
00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 [2024-10-21 11:57:56.370053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 NULL1 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=906062 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.985 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.986 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.986 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.247 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.247 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:20.247 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.247 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.247 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.818 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.818 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:20.818 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.818 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.818 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.109 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.109 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:21.109 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.109 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.109 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.370 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.370 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:21.370 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.370 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.370 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.631 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.631 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:21.631 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.631 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.631 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.892 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.892 11:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:21.892 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.892 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.892 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.461 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.461 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:22.461 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.461 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.461 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.721 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.721 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:22.721 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.721 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.721 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.982 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.982 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:22.982 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.982 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.982 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.243 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.243 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:23.243 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.243 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.243 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.503 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.503 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:23.503 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.503 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.503 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.073 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.073 11:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:24.073 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.073 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.073 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.396 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.396 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:24.396 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.396 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.396 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.657 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.657 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:24.657 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.657 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.657 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.917 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:24.917 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.917 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.917 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:25.176 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.176 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.746 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.746 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:25.746 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.746 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.007 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.008 11:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:26.008 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.008 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.008 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.268 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:26.268 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.268 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.268 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.528 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:26.528 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.528 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.528 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.787 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.787 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:26.787 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.787 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.787 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.358 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.358 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:27.358 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.358 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.358 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.618 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.618 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:27.618 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.618 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.618 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.878 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.878 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:27.879 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.879 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.879 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.139 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.139 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:28.139 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.139 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.139 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.399 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:28.399 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.399 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.399 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.971 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.971 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:28.971 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.971 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.971 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.231 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.231 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:29.231 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.231 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.231 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.491 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.491 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:29.491 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.491 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.491 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.752 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.752 11:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:29.752 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.752 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.752 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.013 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906062 00:13:30.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (906062) - No such process 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 906062 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:30.273 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.274 rmmod nvme_tcp 00:13:30.274 rmmod nvme_fabrics 00:13:30.274 rmmod nvme_keyring 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 905716 ']' 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 905716 ']' 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
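nvmftestfini above tears down the initiator side first: the bare 'rmmod nvme_tcp', 'rmmod nvme_fabrics' and 'rmmod nvme_keyring' lines are verbose output from 'modprobe -v -r', which the script retries in a 'for i in {1..20}' loop, presumably because the modules can stay busy briefly after disconnect. killprocess then stops the nvmf_tgt reactor by pid; every check it makes is visible in the entries above and just below. A condensed sketch of that walk, assuming the visible checks are the whole body (the sudo branch is not exercised here, so it is simply refused):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                          # pid still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # -> reactor_1 above
        fi
        [ "$process_name" = sudo ] && return 1              # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap and propagate exit status
    }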
00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 905716' 00:13:30.274 killing process with pid 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 905716 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:30.274 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:30.534 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:30.534 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:30.534 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:30.534 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.534 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.535 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.535 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.535 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.446 00:13:32.446 real 0m21.196s 00:13:32.446 user 0m42.459s 00:13:32.446 sys 0m9.227s 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 ************************************ 00:13:32.446 END TEST nvmf_connect_stress 00:13:32.446 ************************************ 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.446 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 ************************************ 00:13:32.446 START TEST nvmf_fused_ordering 00:13:32.446 ************************************ 00:13:32.446 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.706 * Looking for test storage... 
00:13:32.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.706 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:32.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.707 --rc genhtml_branch_coverage=1 00:13:32.707 --rc genhtml_function_coverage=1 00:13:32.707 --rc genhtml_legend=1 00:13:32.707 --rc geninfo_all_blocks=1 00:13:32.707 --rc geninfo_unexecuted_blocks=1 00:13:32.707 00:13:32.707 ' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:32.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.707 --rc genhtml_branch_coverage=1 00:13:32.707 --rc genhtml_function_coverage=1 00:13:32.707 --rc genhtml_legend=1 00:13:32.707 --rc geninfo_all_blocks=1 00:13:32.707 --rc geninfo_unexecuted_blocks=1 00:13:32.707 00:13:32.707 ' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:32.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.707 --rc genhtml_branch_coverage=1 00:13:32.707 --rc genhtml_function_coverage=1 00:13:32.707 --rc genhtml_legend=1 00:13:32.707 --rc geninfo_all_blocks=1 00:13:32.707 --rc geninfo_unexecuted_blocks=1 00:13:32.707 00:13:32.707 ' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:32.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.707 --rc genhtml_branch_coverage=1 00:13:32.707 --rc genhtml_function_coverage=1 00:13:32.707 --rc genhtml_legend=1 00:13:32.707 --rc geninfo_all_blocks=1 00:13:32.707 --rc geninfo_unexecuted_blocks=1 00:13:32.707 00:13:32.707 ' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:32.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:32.707 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.851 11:58:16 
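Two details of the fused_ordering prologue above are worth unpacking. First, the 'lt 1.15 2' walk comes from cmp_versions in scripts/common.sh: both version strings are split into fields ('IFS=.-:', so dot, dash and colon all separate) and the fields are compared numerically left to right, the first difference deciding. An independent sketch of the same idea, simplified to dot-separated numeric fields only:

    ver_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0       # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                            # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov is pre-2.0: keep the old --rc lcov_* option names"

Second, the captured failure "[: : integer expression expected" at nvmf/common.sh line 33 is test's -eq being handed an empty string; the run survives only because the failed test returns nonzero and the guarded branch is skipped. A defensive form (VAR is hypothetical, the log does not show which variable line 33 reads):

    if [ "${VAR:-0}" -eq 1 ]; then   # default empty/unset to 0 before -eq
        echo "feature enabled"
    fi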
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:40.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:40.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:40.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.851 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:40.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:40.852 00:13:40.852 --- 10.0.0.2 ping statistics --- 00:13:40.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.852 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:13:40.852 00:13:40.852 --- 10.0.0.1 ping statistics --- 00:13:40.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.852 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=912154 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 912154 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 912154 ']' 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:40.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.852 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 [2024-10-21 11:58:16.826271] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:40.852 [2024-10-21 11:58:16.826354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.852 [2024-10-21 11:58:16.915260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.852 [2024-10-21 11:58:16.966362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.852 [2024-10-21 11:58:16.966412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.852 [2024-10-21 11:58:16.966421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.852 [2024-10-21 11:58:16.966428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.852 [2024-10-21 11:58:16.966435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.852 [2024-10-21 11:58:16.967187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 [2024-10-21 11:58:17.681112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.114 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 [2024-10-21 11:58:17.705391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.375 NULL1 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.375 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:41.375 [2024-10-21 11:58:17.775031] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
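The setup traced above reduces to a handful of commands; a condensed sketch, with all values taken from the trace and only the rpc.py invocation path assumed:

    # Start the target inside the test namespace (pid 912154 in this run):
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Configure it over /var/tmp/spdk.sock:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512   # null bdev: size 1000, 512-byte blocks, as traced
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Then drive the fused_ordering app against that listener:
    ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'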
00:13:41.375 [2024-10-21 11:58:17.775076] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912442 ]
00:13:41.637 Attached to nqn.2016-06.io.spdk:cnode1
00:13:41.637 Namespace ID: 1 size: 1GB
00:13:41.637 fused_ordering(0) ... fused_ordering(205)
00:13:42.210 fused_ordering(206) ... fused_ordering(410)
00:13:42.476 fused_ordering(411) ... fused_ordering(614)
00:13:43.048 fused_ordering(615) ... fused_ordering(820)
[2024-10-21 11:58:20.103716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf35fb0 is same with the state(6) to be set
00:13:43.622 fused_ordering(821) ... fused_ordering(1023)
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 --
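    # For reference, a minimal sketch of the nvmftestfini teardown traced around
    # this point (pid/interface/namespace names are the ones from this run;
    # treating _remove_spdk_ns as deleting the test namespace is an assumption):
    modprobe -v -r nvme-tcp        # retried under 'set +e', as the {1..20} loop above shows
    modprobe -v -r nvme-fabrics
    kill 912154 && wait 912154     # stop the nvmf_tgt reactor started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1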
# set -e 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 912154 ']' 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 912154 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 912154 ']' 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 912154 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.622 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 912154 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 912154' 00:13:43.883 killing process with pid 912154 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 912154 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 912154 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.883 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.432 00:13:46.432 real 0m13.400s 00:13:46.432 user 0m6.907s 00:13:46.432 sys 0m7.305s 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set 
+x 00:13:46.432 ************************************ 00:13:46.432 END TEST nvmf_fused_ordering 00:13:46.432 ************************************ 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.432 ************************************ 00:13:46.432 START TEST nvmf_ns_masking 00:13:46.432 ************************************ 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:46.432 * Looking for test storage... 00:13:46.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.432 --rc genhtml_branch_coverage=1 00:13:46.432 --rc genhtml_function_coverage=1 00:13:46.432 --rc genhtml_legend=1 00:13:46.432 --rc geninfo_all_blocks=1 00:13:46.432 --rc geninfo_unexecuted_blocks=1 00:13:46.432 00:13:46.432 ' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.432 --rc genhtml_branch_coverage=1 00:13:46.432 --rc genhtml_function_coverage=1 00:13:46.432 --rc genhtml_legend=1 00:13:46.432 --rc geninfo_all_blocks=1 00:13:46.432 --rc geninfo_unexecuted_blocks=1 00:13:46.432 00:13:46.432 ' 00:13:46.432 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.432 --rc genhtml_branch_coverage=1 00:13:46.432 --rc genhtml_function_coverage=1 00:13:46.432 --rc genhtml_legend=1 00:13:46.432 --rc geninfo_all_blocks=1 00:13:46.433 --rc geninfo_unexecuted_blocks=1 00:13:46.433 00:13:46.433 ' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.433 --rc genhtml_branch_coverage=1 00:13:46.433 --rc genhtml_function_coverage=1 00:13:46.433 --rc genhtml_legend=1 00:13:46.433 --rc geninfo_all_blocks=1 00:13:46.433 --rc geninfo_unexecuted_blocks=1 00:13:46.433 00:13:46.433 ' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
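    # The lt/cmp_versions trace above is a field-wise numeric version compare,
    # splitting on '.', '-' and ':'; a standalone sketch of the same idea (not
    # the script's exact code), deciding here that lcov 1.15 predates 2:
    lt() {
        local IFS=.-: i a=() b=()
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_*_coverage=1 option spelling"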
nvmf/common.sh@7 -- # uname -s 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=95648c38-e6b3-4186-909b-dbe05c67ce02 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3cbe8c93-268c-4051-94f2-f567a7565c7a 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6ef13575-9ab4-4a30-bf93-101b39b20adf 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:46.433 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:54.575 11:58:29 
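    # How the identities generated above are used: each namespace gets one of the
    # uuidgen values, and each host connects with its own NQN so the target can
    # mask namespaces per host. The connect step for host1 would look like this
    # (standard nvme-cli flags; the masking RPCs themselves come later in the test):
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2016-06.io.spdk:host1 \
        --hostid=6ef13575-9ab4-4a30-bf93-101b39b20adf
    # Whether that host then sees the 95648c38-... or the 3cbe8c93-... namespace
    # is what ns_masking asserts.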
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:54.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:54.575 11:58:29 
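The array setup above buckets PCI functions by vendor:device ID (Intel E810 variants into e810, X722 into x722, Mellanox parts into mlx) before deciding which NICs the test may use. A rough standalone equivalent, using lspci in place of the script's pre-populated pci_bus_cache (the cache itself is filled elsewhere in nvmf/common.sh):

  declare -a e810=() x722=()
  while read -r addr _ ids _; do
    case "$ids" in
      8086:1592|8086:159b) e810+=("$addr") ;;   # E810-CQDA2 / E810 backplane (IDs from the trace)
      8086:37d2)           x722+=("$addr") ;;   # X722 10GbE
    esac
  done < <(lspci -Dn -d 8086:)
  echo "e810: ${e810[*]}"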
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:54.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:54.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
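Each matched PCI function is then mapped to its kernel network interface through sysfs, which is what produces the "Found net devices under 0000:4b:00.0: cvl_0_0" lines. The lookup reduces to the same glob the script uses:

  pci=0000:4b:00.0
  for d in "/sys/bus/pci/devices/$pci/net/"*; do
    dev=${d##*/}                                   # strip the path, keep the interface name
    echo "Found net devices under $pci: $dev ($(cat "/sys/class/net/$dev/operstate"))"
  done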
00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:54.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.575 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.575 11:58:30 
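The commands traced through nvmf_tcp_init above set up the test topology: the target-side port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24 while cvl_0_1 stays in the root namespace with 10.0.0.1/24, so initiator and target talk NVMe/TCP over the physical link of a single machine. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (netns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule for port 4420 and the two cross-namespace pings that follow verify the path before any NVMe traffic is attempted.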
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:54.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:13:54.575 00:13:54.575 --- 10.0.0.2 ping statistics --- 00:13:54.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.575 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:13:54.575 00:13:54.575 --- 10.0.0.1 ping statistics --- 00:13:54.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.575 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:54.575 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=917117 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 917117 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 917117 ']' 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.576 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 [2024-10-21 11:58:30.364161] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:54.576 [2024-10-21 11:58:30.364262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.576 [2024-10-21 11:58:30.457156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.576 [2024-10-21 11:58:30.509199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.576 [2024-10-21 11:58:30.509244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.576 [2024-10-21 11:58:30.509253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.576 [2024-10-21 11:58:30.509260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.576 [2024-10-21 11:58:30.509266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
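nvmfappstart above boils down to launching the target binary inside the namespace and waiting for its RPC socket to answer. A sketch of the same flow; the polling loop is an illustrative stand-in for the script's waitforlisten helper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2   # wait for the UNIX-domain RPC socket to come up
  done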
00:13:54.576 [2024-10-21 11:58:30.510022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:54.838 [2024-10-21 11:58:31.376519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:54.838 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:55.100 Malloc1 00:13:55.100 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:55.360 Malloc2 00:13:55.360 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:55.621 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:55.621 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.882 [2024-10-21 11:58:32.315763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.882 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:55.882 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6ef13575-9ab4-4a30-bf93-101b39b20adf -a 10.0.0.2 -s 4420 -i 4 00:13:56.142 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.142 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:56.142 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.142 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:56.142 
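With the target up, the test provisions it over JSON-RPC and then connects from the kernel initiator; -I passes the host UUID generated at the top of the script so the target can identify this host. The sequence traced above, stripped of the xtrace noise:

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       -I 6ef13575-9ab4-4a30-bf93-101b39b20adf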
11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.058 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.058 [ 0]:0x1 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3df3ebbd5d9342ae8c41ee7783ecc1ca 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3df3ebbd5d9342ae8c41ee7783ecc1ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.318 [ 0]:0x1 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.318 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3df3ebbd5d9342ae8c41ee7783ecc1ca 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3df3ebbd5d9342ae8c41ee7783ecc1ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.579 11:58:34 
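The repeated "[ 0]:0x1" checks come from the script's ns_is_visible helper: a namespace counts as visible when nvme list-ns shows it and it reports a non-zero NGUID. A reconstruction from its trace (a masked namespace identifies with an all-zero NGUID):

  ns_is_visible() {   # $1 = nsid as hex, e.g. 0x1
    nvme list-ns /dev/nvme0 | grep "$1"      # prints e.g. "[ 0]:0x1" when listed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
  }
  ns_is_visible 0x1 && echo "nsid 1 visible to this host"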
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.579 [ 1]:0x2 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.579 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.579 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:13:58.579 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.579 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:58.579 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.839 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.100 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:59.100 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:59.100 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6ef13575-9ab4-4a30-bf93-101b39b20adf -a 10.0.0.2 -s 4420 -i 4 00:13:59.360 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:59.360 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.361 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.361 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:59.361 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:59.361 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.274 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.535 [ 0]:0x2 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.535 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.535 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:01.535 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.535 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.535 [ 0]:0x1 00:14:01.535 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.535 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3df3ebbd5d9342ae8c41ee7783ecc1ca 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3df3ebbd5d9342ae8c41ee7783ecc1ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.796 [ 1]:0x2 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.796 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.057 11:58:38 
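This is the core of the masking test: a namespace added with --no-auto-visible stays hidden from every host until it is attached to a specific host NQN, and detaching hides it again. The RPC round-trip traced above:

  $rpc nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 appears on host1
  $rpc nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again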
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.057 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.058 [ 0]:0x2 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:02.058 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.058 Failed to open ns nvme0n2, errno 2 00:14:02.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.317 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:02.317 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:02.317 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6ef13575-9ab4-4a30-bf93-101b39b20adf -a 10.0.0.2 -s 4420 -i 4 00:14:02.577 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:02.577 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.577 11:58:39 
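The NOT wrapper that keeps appearing in the trace asserts that a command fails, which is how the script proves a namespace is really masked. A simplified sketch of the idea; per the es-handling visible above, the real helper in autotest_common.sh additionally treats exit codes above 128 (signal deaths) as hard errors rather than expected failures:

  NOT() {
    if "$@"; then
      return 1    # command unexpectedly succeeded
    fi
    return 0      # expected failure
  }
  NOT ns_is_visible 0x1 && echo "nsid 1 correctly masked"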
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.577 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:02.577 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:02.577 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.492 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.492 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.492 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.752 [ 0]:0x1 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3df3ebbd5d9342ae8c41ee7783ecc1ca 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3df3ebbd5d9342ae8c41ee7783ecc1ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.752 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.013 [ 1]:0x2 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:05.013 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.317 [ 0]:0x2 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:05.317 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:05.651 [2024-10-21 11:58:41.942262] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:05.651 request: 00:14:05.651 { 00:14:05.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.651 "nsid": 2, 00:14:05.651 "host": "nqn.2016-06.io.spdk:host1", 00:14:05.651 "method": "nvmf_ns_remove_host", 00:14:05.651 "req_id": 1 00:14:05.651 } 00:14:05.651 Got JSON-RPC error response 00:14:05.651 response: 00:14:05.651 { 00:14:05.651 "code": -32602, 00:14:05.651 "message": "Invalid parameters" 00:14:05.651 } 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
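The -32602 error above is expected: nsid 2 was created auto-visible, so per-host visibility calls on it are rejected by nvmf_rpc_ns_visible_paused. rpc.py is only a thin client; the same request can be written out as raw JSON-RPC against the target's UNIX socket (a sketch, assuming a netcat built with -U UNIX-socket support):

  printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_ns_remove_host",
    "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2,
              "host":"nqn.2016-06.io.spdk:host1"}}' | nc -U /var/tmp/spdk.sock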
common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.651 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.651 [ 0]:0x2 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=399f795df70a44c2a04c7435c8dea7b2 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 399f795df70a44c2a04c7435c8dea7b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:05.651 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=919603 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.929 11:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 919603 /var/tmp/host.sock 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 919603 ']' 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:05.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.929 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:05.929 [2024-10-21 11:58:42.314319] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:14:05.929 [2024-10-21 11:58:42.314384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919603 ] 00:14:05.929 [2024-10-21 11:58:42.392751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.929 [2024-10-21 11:58:42.428799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.871 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.871 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:06.871 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.871 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 95648c38-e6b3-4186-909b-dbe05c67ce02 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 95648C38E6B34186909BDBE05C67CE02 -i 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3cbe8c93-268c-4051-94f2-f567a7565c7a 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:07.132 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3CBE8C93268C405194F2F567A7565C7A -i 00:14:07.393 11:58:43 
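The uuid2nguid helper behind the -g arguments is just the UUID's 32 hex digits, upper-cased with the dashes stripped; per the trace it is built around tr -d -. An equivalent one-liner:

  uuid=95648c38-e6b3-4186-909b-dbe05c67ce02
  nguid=$(tr -d - <<< "${uuid^^}")
  echo "$nguid"   # 95648C38E6B34186909BDBE05C67CE02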
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.654 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:07.654 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:07.654 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:07.914 nvme0n1 00:14:07.914 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:07.914 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:08.484 nvme1n2 00:14:08.484 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:08.484 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:08.484 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:08.484 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:08.484 11:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:08.484 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:08.484 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:08.484 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:08.484 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:08.744 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 95648c38-e6b3-4186-909b-dbe05c67ce02 == \9\5\6\4\8\c\3\8\-\e\6\b\3\-\4\1\8\6\-\9\0\9\b\-\d\b\e\0\5\c\6\7\c\e\0\2 ]] 00:14:08.744 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:08.744 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:08.744 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:09.004 11:58:45 
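For the visibility checks from here on, the initiator is not the kernel but a second SPDK app (the spdk_tgt started earlier on /var/tmp/host.sock, pid 919603); hostrpc is the script's wrapper pointing rpc.py at that socket. Each attach creates bdevs named <ctrlr>n<nsid> for exactly the namespaces the given host NQN may see:

  hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }
  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2
  hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # nvme0n1 nvme1n2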
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3cbe8c93-268c-4051-94f2-f567a7565c7a == \3\c\b\e\8\c\9\3\-\2\6\8\c\-\4\0\5\1\-\9\4\f\2\-\f\5\6\7\a\7\5\6\5\c\7\a ]] 00:14:09.004 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.004 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 95648c38-e6b3-4186-909b-dbe05c67ce02 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95648C38E6B34186909BDBE05C67CE02 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95648C38E6B34186909BDBE05C67CE02 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:09.265 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95648C38E6B34186909BDBE05C67CE02 00:14:09.527 [2024-10-21 11:58:45.932719] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:09.527 [2024-10-21 11:58:45.932747] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:09.527 [2024-10-21 11:58:45.932753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.527 request: 00:14:09.527 { 00:14:09.527 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:09.527 "namespace": { 00:14:09.527 "bdev_name": "invalid", 00:14:09.527 "nsid": 1, 00:14:09.527 "nguid": "95648C38E6B34186909BDBE05C67CE02", 00:14:09.527 "no_auto_visible": false 00:14:09.527 }, 00:14:09.527 "method": "nvmf_subsystem_add_ns", 00:14:09.527 "req_id": 1 00:14:09.527 } 00:14:09.527 Got JSON-RPC error response 00:14:09.527 response: 00:14:09.527 { 00:14:09.527 "code": -32602, 00:14:09.527 "message": "Invalid parameters" 00:14:09.527 } 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 95648c38-e6b3-4186-909b-dbe05c67ce02 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:09.527 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 95648C38E6B34186909BDBE05C67CE02 -i 00:14:09.787 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:11.702 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:11.702 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:11.702 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 919603 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 919603 ']' 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 919603 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919603 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919603' 00:14:11.963 killing process with pid 919603 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 919603 00:14:11.963 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 919603 00:14:12.224 11:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.224 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.224 rmmod nvme_tcp 00:14:12.224 rmmod nvme_fabrics 00:14:12.224 rmmod nvme_keyring 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 917117 ']' 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 917117 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 917117 ']' 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 917117 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917117 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917117' 00:14:12.484 killing process with pid 917117 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 917117 00:14:12.484 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 917117 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:12.484 11:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:12.484 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:12.485 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.485 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.485 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.485 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.485 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.031 00:14:15.031 real 0m28.594s 00:14:15.031 user 0m32.440s 00:14:15.031 sys 0m8.270s 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.031 ************************************ 00:14:15.031 END TEST nvmf_ns_masking 00:14:15.031 ************************************ 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.031 ************************************ 00:14:15.031 START TEST nvmf_nvme_cli 00:14:15.031 ************************************ 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.031 * Looking for test storage... 
00:14:15.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.031 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.032 --rc genhtml_branch_coverage=1 00:14:15.032 --rc genhtml_function_coverage=1 00:14:15.032 --rc genhtml_legend=1 00:14:15.032 --rc geninfo_all_blocks=1 00:14:15.032 --rc geninfo_unexecuted_blocks=1 00:14:15.032 00:14:15.032 ' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.032 --rc genhtml_branch_coverage=1 00:14:15.032 --rc genhtml_function_coverage=1 00:14:15.032 --rc genhtml_legend=1 00:14:15.032 --rc geninfo_all_blocks=1 00:14:15.032 --rc geninfo_unexecuted_blocks=1 00:14:15.032 00:14:15.032 ' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.032 --rc genhtml_branch_coverage=1 00:14:15.032 --rc genhtml_function_coverage=1 00:14:15.032 --rc genhtml_legend=1 00:14:15.032 --rc geninfo_all_blocks=1 00:14:15.032 --rc geninfo_unexecuted_blocks=1 00:14:15.032 00:14:15.032 ' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.032 --rc genhtml_branch_coverage=1 00:14:15.032 --rc genhtml_function_coverage=1 00:14:15.032 --rc genhtml_legend=1 00:14:15.032 --rc geninfo_all_blocks=1 00:14:15.032 --rc geninfo_unexecuted_blocks=1 00:14:15.032 00:14:15.032 ' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.032 11:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:15.032 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.177 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:23.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:23.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.178 
11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:23.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:23.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:23.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:14:23.178 00:14:23.178 --- 10.0.0.2 ping statistics --- 00:14:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.178 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:23.178 00:14:23.178 --- 10.0.0.1 ping statistics --- 00:14:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.178 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=925027 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 925027 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 925027 ']' 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.178 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.178 [2024-10-21 11:58:58.775141] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:14:23.178 [2024-10-21 11:58:58.775189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.178 [2024-10-21 11:58:58.858605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.178 [2024-10-21 11:58:58.895614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.179 [2024-10-21 11:58:58.895648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.179 [2024-10-21 11:58:58.895656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.179 [2024-10-21 11:58:58.895662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.179 [2024-10-21 11:58:58.895668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.179 [2024-10-21 11:58:58.900335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.179 [2024-10-21 11:58:58.900438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.179 [2024-10-21 11:58:58.900673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.179 [2024-10-21 11:58:58.900674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 [2024-10-21 11:58:59.626246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 Malloc0 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 Malloc1 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 [2024-10-21 11:58:59.729655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:23.440 00:14:23.440 Discovery Log Number of Records 2, Generation counter 2 00:14:23.440 =====Discovery Log Entry 0====== 00:14:23.440 trtype: tcp 00:14:23.440 adrfam: ipv4 00:14:23.440 subtype: current discovery subsystem 00:14:23.440 treq: not required 00:14:23.440 portid: 0 00:14:23.440 trsvcid: 4420 00:14:23.440 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:23.440 traddr: 10.0.0.2 00:14:23.440 eflags: explicit discovery connections, duplicate discovery information 00:14:23.440 sectype: none 00:14:23.440 =====Discovery Log Entry 1====== 00:14:23.440 trtype: tcp 00:14:23.440 adrfam: ipv4 00:14:23.440 subtype: nvme subsystem 00:14:23.440 treq: not required 00:14:23.440 portid: 0 00:14:23.440 trsvcid: 4420 00:14:23.440 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:23.440 traddr: 10.0.0.2 00:14:23.440 eflags: none 00:14:23.440 sectype: none 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:23.440 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:25.352 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:27.267 11:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:27.267 /dev/nvme0n2 ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.267 11:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:27.267 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.268 rmmod nvme_tcp 00:14:27.268 rmmod nvme_fabrics 00:14:27.268 rmmod nvme_keyring 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 925027 ']' 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 925027 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 925027 ']' 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 925027 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.268 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925027 
00:14:27.529 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.529 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.529 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925027' 00:14:27.529 killing process with pid 925027 00:14:27.529 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 925027 00:14:27.529 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 925027 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.529 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.074 00:14:30.074 real 0m14.888s 00:14:30.074 user 0m22.551s 00:14:30.074 sys 0m6.175s 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.074 ************************************ 00:14:30.074 END TEST nvmf_nvme_cli 00:14:30.074 ************************************ 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.074 ************************************ 00:14:30.074 START TEST nvmf_vfio_user 00:14:30.074 ************************************ 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:30.074 * Looking for test storage... 00:14:30.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.074 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.075 --rc genhtml_branch_coverage=1 00:14:30.075 --rc genhtml_function_coverage=1 00:14:30.075 --rc genhtml_legend=1 00:14:30.075 --rc geninfo_all_blocks=1 00:14:30.075 --rc geninfo_unexecuted_blocks=1 00:14:30.075 00:14:30.075 ' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.075 --rc genhtml_branch_coverage=1 00:14:30.075 --rc genhtml_function_coverage=1 00:14:30.075 --rc genhtml_legend=1 00:14:30.075 --rc geninfo_all_blocks=1 00:14:30.075 --rc geninfo_unexecuted_blocks=1 00:14:30.075 00:14:30.075 ' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.075 --rc genhtml_branch_coverage=1 00:14:30.075 --rc genhtml_function_coverage=1 00:14:30.075 --rc genhtml_legend=1 00:14:30.075 --rc geninfo_all_blocks=1 00:14:30.075 --rc geninfo_unexecuted_blocks=1 00:14:30.075 00:14:30.075 ' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.075 --rc genhtml_branch_coverage=1 00:14:30.075 --rc genhtml_function_coverage=1 00:14:30.075 --rc genhtml_legend=1 00:14:30.075 --rc geninfo_all_blocks=1 00:14:30.075 --rc geninfo_unexecuted_blocks=1 00:14:30.075 00:14:30.075 ' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
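The '[: : integer expression expected' message captured above is bash rejecting an empty operand: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the variable it tests is empty in this run, and -eq demands integers on both sides. A minimal reproduction, with a hypothetical variable name standing in for the one being tested, plus the usual guards:

  flag=""                     # hypothetical stand-in for the unset option
  [ "$flag" -eq 1 ]           # -> bash: [: : integer expression expected (exit status 2)
  [ "${flag:-0}" -eq 1 ]      # guard: default the empty value so the comparison stays numeric
  [[ "$flag" == 1 ]]          # guard: string comparison never raises the error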
00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=926660 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 926660' 00:14:30.075 Process pid: 926660 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 926660 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 926660 ']' 00:14:30.075 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:30.076 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.076 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.076 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.076 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.076 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.076 [2024-10-21 11:59:06.470693] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:14:30.076 [2024-10-21 11:59:06.470772] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.076 [2024-10-21 11:59:06.553914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.076 [2024-10-21 11:59:06.589200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.076 [2024-10-21 11:59:06.589233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:30.076 [2024-10-21 11:59:06.589239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.076 [2024-10-21 11:59:06.589244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.076 [2024-10-21 11:59:06.589248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.076 [2024-10-21 11:59:06.590840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.076 [2024-10-21 11:59:06.590995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.076 [2024-10-21 11:59:06.591147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.076 [2024-10-21 11:59:06.591150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.017 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.017 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:31.017 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:31.961 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:32.222 Malloc1 00:14:32.222 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:32.483 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:32.483 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:32.743 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.743 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:32.743 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:33.004 Malloc2 00:14:33.004 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
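Each device in this block is stood up by the same five-step recipe traced at nvmf_vfio_user.sh@66-74, first for Malloc1/cnode1 and now for Malloc2/cnode2: create the VFIOUSER transport once, then per device make the socket directory that doubles as the transport address, back it with a malloc bdev, and create, populate, and expose a subsystem. Collapsed into a standalone sketch using the paths and RPC names from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                 # register the transport once
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i  # directory serves as traddr
      $rpc bdev_malloc_create 64 512 -b Malloc$i         # 64 MiB RAM bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done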
00:14:33.004 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:33.265 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:33.528 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:33.528 [2024-10-21 11:59:09.986453] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:14:33.528 [2024-10-21 11:59:09.986495] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927381 ] 00:14:33.528 [2024-10-21 11:59:10.017037] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:33.528 [2024-10-21 11:59:10.028326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.528 [2024-10-21 11:59:10.028345] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbeb9e47000 00:14:33.528 [2024-10-21 11:59:10.029317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.030328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.031335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.032338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.033342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.034349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.035360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.036363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.528 [2024-10-21 11:59:10.037368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.528 [2024-10-21 11:59:10.037379] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbeb9e3c000 00:14:33.528 [2024-10-21 11:59:10.038390] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.528 [2024-10-21 11:59:10.050748] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:33.528 [2024-10-21 11:59:10.050773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:33.528 [2024-10-21 11:59:10.053462] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.528 [2024-10-21 11:59:10.053498] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:33.528 [2024-10-21 11:59:10.053563] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:33.528 [2024-10-21 11:59:10.053575] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:33.528 [2024-10-21 11:59:10.053580] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:33.528 [2024-10-21 11:59:10.054463] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:33.528 [2024-10-21 11:59:10.054472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:33.528 [2024-10-21 11:59:10.054477] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:33.528 [2024-10-21 11:59:10.055464] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.528 [2024-10-21 11:59:10.055471] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:33.528 [2024-10-21 11:59:10.055477] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:33.528 [2024-10-21 11:59:10.056472] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:33.528 [2024-10-21 11:59:10.056479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:33.528 [2024-10-21 11:59:10.057477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:33.528 [2024-10-21 
11:59:10.057483] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:33.529 [2024-10-21 11:59:10.057487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:33.529 [2024-10-21 11:59:10.057491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:33.529 [2024-10-21 11:59:10.057596] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:33.529 [2024-10-21 11:59:10.057600] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:33.529 [2024-10-21 11:59:10.057603] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:33.529 [2024-10-21 11:59:10.058482] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:33.529 [2024-10-21 11:59:10.059492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:33.529 [2024-10-21 11:59:10.060503] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.529 [2024-10-21 11:59:10.061500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.529 [2024-10-21 11:59:10.061559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:33.529 [2024-10-21 11:59:10.062512] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:33.529 [2024-10-21 11:59:10.062518] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:33.529 [2024-10-21 11:59:10.062521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062536] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:33.529 [2024-10-21 11:59:10.062542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062554] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.529 [2024-10-21 11:59:10.062558] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.529 [2024-10-21 11:59:10.062561] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.529 [2024-10-21 11:59:10.062570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062618] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:33.529 [2024-10-21 11:59:10.062621] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:33.529 [2024-10-21 11:59:10.062625] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:33.529 [2024-10-21 11:59:10.062628] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:33.529 [2024-10-21 11:59:10.062631] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:33.529 [2024-10-21 11:59:10.062635] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:33.529 [2024-10-21 11:59:10.062638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.529 [2024-10-21 11:59:10.062676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.529 [2024-10-21 11:59:10.062682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.529 [2024-10-21 11:59:10.062688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.529 [2024-10-21 11:59:10.062691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062722] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:33.529 [2024-10-21 11:59:10.062725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062809] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:33.529 [2024-10-21 11:59:10.062812] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:33.529 [2024-10-21 11:59:10.062815] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.529 [2024-10-21 11:59:10.062819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062841] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:33.529 [2024-10-21 11:59:10.062851] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062862] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.529 [2024-10-21 11:59:10.062865] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.529 [2024-10-21 11:59:10.062867] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.529 [2024-10-21 11:59:10.062871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062898] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062905] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062910] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.529 [2024-10-21 11:59:10.062913] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.529 [2024-10-21 11:59:10.062915] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.529 [2024-10-21 11:59:10.062919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062933] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062955] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062958] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:33.529 [2024-10-21 11:59:10.062961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:33.529 [2024-10-21 11:59:10.062965] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:33.529 [2024-10-21 11:59:10.062979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.062987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.062995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.063003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.063012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.063018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.063026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.529 [2024-10-21 11:59:10.063038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:33.529 [2024-10-21 11:59:10.063048] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:33.529 [2024-10-21 11:59:10.063052] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:33.529 [2024-10-21 11:59:10.063055] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:33.529 [2024-10-21 11:59:10.063059] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:33.529 [2024-10-21 11:59:10.063061] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:33.529 [2024-10-21 11:59:10.063066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:33.529 [2024-10-21 11:59:10.063072] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:33.530 [2024-10-21 11:59:10.063075] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:33.530 [2024-10-21 11:59:10.063077] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.530 [2024-10-21 11:59:10.063081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:33.530 [2024-10-21 11:59:10.063086] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:33.530 [2024-10-21 11:59:10.063089] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.530 [2024-10-21 11:59:10.063092] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.530 [2024-10-21 11:59:10.063096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.530 [2024-10-21 11:59:10.063102] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:33.530 [2024-10-21 11:59:10.063105] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:33.530 [2024-10-21 11:59:10.063107] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.530 [2024-10-21 11:59:10.063111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:33.530 [2024-10-21 11:59:10.063116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:33.530 [2024-10-21 11:59:10.063125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:33.530 [2024-10-21 11:59:10.063133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:33.530 [2024-10-21 11:59:10.063138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:33.530 ===================================================== 00:14:33.530 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:33.530 ===================================================== 00:14:33.530 Controller Capabilities/Features 00:14:33.530 ================================ 00:14:33.530 Vendor ID: 4e58 00:14:33.530 Subsystem Vendor ID: 4e58 00:14:33.530 Serial Number: SPDK1 00:14:33.530 Model Number: SPDK bdev Controller 00:14:33.530 Firmware Version: 25.01 00:14:33.530 Recommended Arb Burst: 6 00:14:33.530 IEEE OUI Identifier: 8d 6b 50 00:14:33.530 Multi-path I/O 00:14:33.530 May have multiple subsystem ports: Yes 00:14:33.530 May have multiple controllers: Yes 00:14:33.530 Associated with SR-IOV VF: No 00:14:33.530 Max Data Transfer Size: 131072 00:14:33.530 Max Number of Namespaces: 32 00:14:33.530 Max Number of I/O Queues: 127 00:14:33.530 NVMe Specification Version (VS): 1.3 00:14:33.530 NVMe Specification Version (Identify): 1.3 00:14:33.530 Maximum Queue Entries: 256 00:14:33.530 Contiguous Queues Required: Yes 00:14:33.530 Arbitration Mechanisms Supported 00:14:33.530 Weighted Round Robin: Not Supported 00:14:33.530 Vendor Specific: Not Supported 00:14:33.530 Reset Timeout: 15000 ms 00:14:33.530 Doorbell Stride: 4 bytes 00:14:33.530 NVM Subsystem Reset: Not Supported 00:14:33.530 Command Sets Supported 00:14:33.530 NVM Command Set: Supported 00:14:33.530 Boot Partition: Not Supported 00:14:33.530 Memory Page Size Minimum: 4096 bytes 00:14:33.530 Memory Page Size Maximum: 4096 bytes 00:14:33.530 Persistent Memory Region: Not Supported 00:14:33.530 Optional Asynchronous Events Supported 00:14:33.530 Namespace Attribute Notices: Supported 00:14:33.530 Firmware Activation Notices: Not Supported 00:14:33.530 ANA Change Notices: Not Supported 00:14:33.530 PLE Aggregate Log Change Notices: Not Supported 00:14:33.530 LBA Status Info Alert Notices: Not Supported 00:14:33.530 EGE Aggregate Log Change Notices: Not Supported 00:14:33.530 Normal NVM Subsystem Shutdown event: Not Supported 00:14:33.530 Zone Descriptor Change Notices: Not Supported 00:14:33.530 Discovery Log Change Notices: Not Supported 00:14:33.530 Controller Attributes 00:14:33.530 128-bit Host Identifier: Supported 00:14:33.530 Non-Operational Permissive Mode: Not Supported 00:14:33.530 NVM Sets: Not Supported 00:14:33.530 Read Recovery Levels: Not Supported 00:14:33.530 Endurance Groups: Not Supported 00:14:33.530 Predictable Latency Mode: Not Supported 00:14:33.530 Traffic Based Keep ALive: Not Supported 00:14:33.530 Namespace Granularity: Not Supported 00:14:33.530 SQ Associations: Not Supported 00:14:33.530 UUID List: Not Supported 00:14:33.530 Multi-Domain Subsystem: Not Supported 00:14:33.530 Fixed Capacity Management: Not Supported 00:14:33.530 Variable Capacity Management: Not Supported 00:14:33.530 Delete Endurance Group: Not Supported 00:14:33.530 Delete NVM Set: Not Supported 00:14:33.530 Extended LBA Formats Supported: Not Supported 00:14:33.530 Flexible Data Placement Supported: Not Supported 00:14:33.530 00:14:33.530 Controller Memory Buffer Support 00:14:33.530 ================================ 00:14:33.530 Supported: No 00:14:33.530 00:14:33.530 Persistent Memory Region Support 00:14:33.530 
================================ 00:14:33.530 Supported: No 00:14:33.530 00:14:33.530 Admin Command Set Attributes 00:14:33.530 ============================ 00:14:33.530 Security Send/Receive: Not Supported 00:14:33.530 Format NVM: Not Supported 00:14:33.530 Firmware Activate/Download: Not Supported 00:14:33.530 Namespace Management: Not Supported 00:14:33.530 Device Self-Test: Not Supported 00:14:33.530 Directives: Not Supported 00:14:33.530 NVMe-MI: Not Supported 00:14:33.530 Virtualization Management: Not Supported 00:14:33.530 Doorbell Buffer Config: Not Supported 00:14:33.530 Get LBA Status Capability: Not Supported 00:14:33.530 Command & Feature Lockdown Capability: Not Supported 00:14:33.530 Abort Command Limit: 4 00:14:33.530 Async Event Request Limit: 4 00:14:33.530 Number of Firmware Slots: N/A 00:14:33.530 Firmware Slot 1 Read-Only: N/A 00:14:33.530 Firmware Activation Without Reset: N/A 00:14:33.530 Multiple Update Detection Support: N/A 00:14:33.530 Firmware Update Granularity: No Information Provided 00:14:33.530 Per-Namespace SMART Log: No 00:14:33.530 Asymmetric Namespace Access Log Page: Not Supported 00:14:33.530 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:33.530 Command Effects Log Page: Supported 00:14:33.530 Get Log Page Extended Data: Supported 00:14:33.530 Telemetry Log Pages: Not Supported 00:14:33.530 Persistent Event Log Pages: Not Supported 00:14:33.530 Supported Log Pages Log Page: May Support 00:14:33.530 Commands Supported & Effects Log Page: Not Supported 00:14:33.530 Feature Identifiers & Effects Log Page:May Support 00:14:33.530 NVMe-MI Commands & Effects Log Page: May Support 00:14:33.530 Data Area 4 for Telemetry Log: Not Supported 00:14:33.530 Error Log Page Entries Supported: 128 00:14:33.530 Keep Alive: Supported 00:14:33.530 Keep Alive Granularity: 10000 ms 00:14:33.530 00:14:33.530 NVM Command Set Attributes 00:14:33.530 ========================== 00:14:33.530 Submission Queue Entry Size 00:14:33.530 Max: 64 00:14:33.530 Min: 64 00:14:33.530 Completion Queue Entry Size 00:14:33.530 Max: 16 00:14:33.530 Min: 16 00:14:33.530 Number of Namespaces: 32 00:14:33.530 Compare Command: Supported 00:14:33.530 Write Uncorrectable Command: Not Supported 00:14:33.530 Dataset Management Command: Supported 00:14:33.530 Write Zeroes Command: Supported 00:14:33.530 Set Features Save Field: Not Supported 00:14:33.530 Reservations: Not Supported 00:14:33.530 Timestamp: Not Supported 00:14:33.530 Copy: Supported 00:14:33.530 Volatile Write Cache: Present 00:14:33.530 Atomic Write Unit (Normal): 1 00:14:33.530 Atomic Write Unit (PFail): 1 00:14:33.530 Atomic Compare & Write Unit: 1 00:14:33.530 Fused Compare & Write: Supported 00:14:33.530 Scatter-Gather List 00:14:33.530 SGL Command Set: Supported (Dword aligned) 00:14:33.530 SGL Keyed: Not Supported 00:14:33.530 SGL Bit Bucket Descriptor: Not Supported 00:14:33.530 SGL Metadata Pointer: Not Supported 00:14:33.530 Oversized SGL: Not Supported 00:14:33.530 SGL Metadata Address: Not Supported 00:14:33.530 SGL Offset: Not Supported 00:14:33.530 Transport SGL Data Block: Not Supported 00:14:33.530 Replay Protected Memory Block: Not Supported 00:14:33.530 00:14:33.530 Firmware Slot Information 00:14:33.530 ========================= 00:14:33.530 Active slot: 1 00:14:33.530 Slot 1 Firmware Revision: 25.01 00:14:33.530 00:14:33.530 00:14:33.530 Commands Supported and Effects 00:14:33.530 ============================== 00:14:33.530 Admin Commands 00:14:33.530 -------------- 00:14:33.530 Get Log Page (02h): Supported 
00:14:33.530 Identify (06h): Supported 00:14:33.530 Abort (08h): Supported 00:14:33.530 Set Features (09h): Supported 00:14:33.530 Get Features (0Ah): Supported 00:14:33.530 Asynchronous Event Request (0Ch): Supported 00:14:33.530 Keep Alive (18h): Supported 00:14:33.530 I/O Commands 00:14:33.530 ------------ 00:14:33.530 Flush (00h): Supported LBA-Change 00:14:33.530 Write (01h): Supported LBA-Change 00:14:33.530 Read (02h): Supported 00:14:33.530 Compare (05h): Supported 00:14:33.530 Write Zeroes (08h): Supported LBA-Change 00:14:33.530 Dataset Management (09h): Supported LBA-Change 00:14:33.530 Copy (19h): Supported LBA-Change 00:14:33.530 00:14:33.530 Error Log 00:14:33.530 ========= 00:14:33.530 00:14:33.530 Arbitration 00:14:33.530 =========== 00:14:33.530 Arbitration Burst: 1 00:14:33.530 00:14:33.530 Power Management 00:14:33.530 ================ 00:14:33.530 Number of Power States: 1 00:14:33.530 Current Power State: Power State #0 00:14:33.531 Power State #0: 00:14:33.531 Max Power: 0.00 W 00:14:33.531 Non-Operational State: Operational 00:14:33.531 Entry Latency: Not Reported 00:14:33.531 Exit Latency: Not Reported 00:14:33.531 Relative Read Throughput: 0 00:14:33.531 Relative Read Latency: 0 00:14:33.531 Relative Write Throughput: 0 00:14:33.531 Relative Write Latency: 0 00:14:33.531 Idle Power: Not Reported 00:14:33.531 Active Power: Not Reported 00:14:33.531 Non-Operational Permissive Mode: Not Supported 00:14:33.531 00:14:33.531 Health Information 00:14:33.531 ================== 00:14:33.531 Critical Warnings: 00:14:33.531 Available Spare Space: OK 00:14:33.531 Temperature: OK 00:14:33.531 Device Reliability: OK 00:14:33.531 Read Only: No 00:14:33.531 Volatile Memory Backup: OK 00:14:33.531 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:33.531 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:33.531 Available Spare: 0% 00:14:33.531 Available Sp[2024-10-21 11:59:10.063209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:33.531 [2024-10-21 11:59:10.063219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:33.531 [2024-10-21 11:59:10.063240] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:33.531 [2024-10-21 11:59:10.063248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.531 [2024-10-21 11:59:10.063252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.531 [2024-10-21 11:59:10.063257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.531 [2024-10-21 11:59:10.063261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.531 [2024-10-21 11:59:10.065326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.531 [2024-10-21 11:59:10.065335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:33.531 [2024-10-21 11:59:10.065527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:33.531 [2024-10-21 11:59:10.065564] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:33.531 [2024-10-21 11:59:10.065569] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:33.531 [2024-10-21 11:59:10.066537] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:33.531 [2024-10-21 11:59:10.066545] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:33.531 [2024-10-21 11:59:10.066622] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:33.531 [2024-10-21 11:59:10.068557] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.531 Life Percentage Used: 0% 00:14:33.531 Data Units Read: 0 00:14:33.531 Data Units Written: 0 00:14:33.531 Host Read Commands: 0 00:14:33.531 Host Write Commands: 0 00:14:33.531 Controller Busy Time: 0 minutes 00:14:33.531 Power Cycles: 0 00:14:33.531 Power On Hours: 0 hours 00:14:33.531 Unsafe Shutdowns: 0 00:14:33.531 Unrecoverable Media Errors: 0 00:14:33.531 Lifetime Error Log Entries: 0 00:14:33.531 Warning Temperature Time: 0 minutes 00:14:33.531 Critical Temperature Time: 0 minutes 00:14:33.531 00:14:33.531 Number of Queues 00:14:33.531 ================ 00:14:33.531 Number of I/O Submission Queues: 127 00:14:33.531 Number of I/O Completion Queues: 127 00:14:33.531 00:14:33.531 Active Namespaces 00:14:33.531 ================= 00:14:33.531 Namespace ID:1 00:14:33.531 Error Recovery Timeout: Unlimited 00:14:33.531 Command Set Identifier: NVM (00h) 00:14:33.531 Deallocate: Supported 00:14:33.531 Deallocated/Unwritten Error: Not Supported 00:14:33.531 Deallocated Read Value: Unknown 00:14:33.531 Deallocate in Write Zeroes: Not Supported 00:14:33.531 Deallocated Guard Field: 0xFFFF 00:14:33.531 Flush: Supported 00:14:33.531 Reservation: Supported 00:14:33.531 Namespace Sharing Capabilities: Multiple Controllers 00:14:33.531 Size (in LBAs): 131072 (0GiB) 00:14:33.531 Capacity (in LBAs): 131072 (0GiB) 00:14:33.531 Utilization (in LBAs): 131072 (0GiB) 00:14:33.531 NGUID: B8DFA5D2F362465AB9619D9A609B9007 00:14:33.531 UUID: b8dfa5d2-f362-465a-b961-9d9a609b9007 00:14:33.531 Thin Provisioning: Not Supported 00:14:33.531 Per-NS Atomic Units: Yes 00:14:33.531 Atomic Boundary Size (Normal): 0 00:14:33.531 Atomic Boundary Size (PFail): 0 00:14:33.531 Atomic Boundary Offset: 0 00:14:33.531 Maximum Single Source Range Length: 65535 00:14:33.531 Maximum Copy Length: 65535 00:14:33.531 Maximum Source Range Count: 1 00:14:33.531 NGUID/EUI64 Never Reused: No 00:14:33.531 Namespace Write Protected: No 00:14:33.531 Number of LBA Formats: 1 00:14:33.531 Current LBA Format: LBA Format #00 00:14:33.531 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:33.531 00:14:33.531 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:33.792 [2024-10-21 11:59:10.247006] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.082 Initializing NVMe Controllers 00:14:39.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:39.082 Initialization complete. Launching workers. 00:14:39.082 ======================================================== 00:14:39.082 Latency(us) 00:14:39.082 Device Information : IOPS MiB/s Average min max 00:14:39.082 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39966.14 156.12 3202.94 851.37 10775.46 00:14:39.082 ======================================================== 00:14:39.082 Total : 39966.14 156.12 3202.94 851.37 10775.46 00:14:39.082 00:14:39.082 [2024-10-21 11:59:15.267698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.082 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:39.082 [2024-10-21 11:59:15.446507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.368 Initializing NVMe Controllers 00:14:44.368 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.368 Initialization complete. Launching workers. 00:14:44.368 ======================================================== 00:14:44.368 Latency(us) 00:14:44.368 Device Information : IOPS MiB/s Average min max 00:14:44.368 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16007.80 62.53 8005.53 5408.26 15962.29 00:14:44.368 ======================================================== 00:14:44.368 Total : 16007.80 62.53 8005.53 5408.26 15962.29 00:14:44.368 00:14:44.368 [2024-10-21 11:59:20.486261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.369 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:44.369 [2024-10-21 11:59:20.683123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.659 [2024-10-21 11:59:25.785698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.659 Initializing NVMe Controllers 00:14:49.659 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.659 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.660 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:49.660 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:49.660 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:49.660 Initialization complete. Launching workers. 
00:14:49.660 Starting thread on core 2 00:14:49.660 Starting thread on core 3 00:14:49.660 Starting thread on core 1 00:14:49.660 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:49.660 [2024-10-21 11:59:26.021647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.961 [2024-10-21 11:59:29.080523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.961 Initializing NVMe Controllers 00:14:52.961 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.961 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.961 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:52.961 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:52.961 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:52.961 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:52.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:52.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:52.961 Initialization complete. Launching workers. 00:14:52.961 Starting thread on core 1 with urgent priority queue 00:14:52.961 Starting thread on core 2 with urgent priority queue 00:14:52.961 Starting thread on core 3 with urgent priority queue 00:14:52.961 Starting thread on core 0 with urgent priority queue 00:14:52.961 SPDK bdev Controller (SPDK1 ) core 0: 9364.00 IO/s 10.68 secs/100000 ios 00:14:52.961 SPDK bdev Controller (SPDK1 ) core 1: 14874.67 IO/s 6.72 secs/100000 ios 00:14:52.961 SPDK bdev Controller (SPDK1 ) core 2: 8889.33 IO/s 11.25 secs/100000 ios 00:14:52.961 SPDK bdev Controller (SPDK1 ) core 3: 16768.67 IO/s 5.96 secs/100000 ios 00:14:52.961 ======================================================== 00:14:52.961 00:14:52.961 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:52.961 [2024-10-21 11:59:29.307758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.961 Initializing NVMe Controllers 00:14:52.961 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.961 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.961 Namespace ID: 1 size: 0GB 00:14:52.961 Initialization complete. 00:14:52.961 INFO: using host memory buffer for IO 00:14:52.961 Hello world! 
00:14:52.961 [2024-10-21 11:59:29.343981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.961 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:53.222 [2024-10-21 11:59:29.573766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.165 Initializing NVMe Controllers 00:14:54.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.165 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.165 Initialization complete. Launching workers. 00:14:54.165 submit (in ns) avg, min, max = 6150.8, 2845.0, 3997894.2 00:14:54.165 complete (in ns) avg, min, max = 15111.3, 1635.0, 3998665.8 00:14:54.165 00:14:54.165 Submit histogram 00:14:54.165 ================ 00:14:54.165 Range in us Cumulative Count 00:14:54.165 2.840 - 2.853: 0.1778% ( 36) 00:14:54.165 2.853 - 2.867: 1.3237% ( 232) 00:14:54.165 2.867 - 2.880: 4.3167% ( 606) 00:14:54.165 2.880 - 2.893: 10.1447% ( 1180) 00:14:54.165 2.893 - 2.907: 15.3702% ( 1058) 00:14:54.165 2.907 - 2.920: 20.8969% ( 1119) 00:14:54.165 2.920 - 2.933: 27.6535% ( 1368) 00:14:54.165 2.933 - 2.947: 33.5457% ( 1193) 00:14:54.165 2.947 - 2.960: 39.0329% ( 1111) 00:14:54.165 2.960 - 2.973: 43.9374% ( 993) 00:14:54.165 2.973 - 2.987: 48.8418% ( 993) 00:14:54.165 2.987 - 3.000: 54.9612% ( 1239) 00:14:54.165 3.000 - 3.013: 63.6440% ( 1758) 00:14:54.165 3.013 - 3.027: 72.4008% ( 1773) 00:14:54.165 3.027 - 3.040: 80.8614% ( 1713) 00:14:54.165 3.040 - 3.053: 87.6574% ( 1376) 00:14:54.165 3.053 - 3.067: 92.6310% ( 1007) 00:14:54.165 3.067 - 3.080: 95.5253% ( 586) 00:14:54.165 3.080 - 3.093: 97.2835% ( 356) 00:14:54.165 3.093 - 3.107: 98.2516% ( 196) 00:14:54.165 3.107 - 3.120: 98.7356% ( 98) 00:14:54.165 3.120 - 3.133: 98.8640% ( 26) 00:14:54.165 3.133 - 3.147: 98.9579% ( 19) 00:14:54.165 3.147 - 3.160: 98.9727% ( 3) 00:14:54.165 3.160 - 3.173: 98.9826% ( 2) 00:14:54.165 3.173 - 3.187: 98.9924% ( 2) 00:14:54.165 3.187 - 3.200: 99.0171% ( 5) 00:14:54.165 3.200 - 3.213: 99.0270% ( 2) 00:14:54.165 3.227 - 3.240: 99.0320% ( 1) 00:14:54.165 3.240 - 3.253: 99.0369% ( 1) 00:14:54.165 3.253 - 3.267: 99.0468% ( 2) 00:14:54.165 3.267 - 3.280: 99.0567% ( 2) 00:14:54.165 3.280 - 3.293: 99.0665% ( 2) 00:14:54.165 3.293 - 3.307: 99.0764% ( 2) 00:14:54.165 3.307 - 3.320: 99.0863% ( 2) 00:14:54.165 3.320 - 3.333: 99.0962% ( 2) 00:14:54.165 3.333 - 3.347: 99.1060% ( 2) 00:14:54.165 3.347 - 3.360: 99.1307% ( 5) 00:14:54.165 3.360 - 3.373: 99.1505% ( 4) 00:14:54.165 3.373 - 3.387: 99.1554% ( 1) 00:14:54.165 3.387 - 3.400: 99.1653% ( 2) 00:14:54.165 3.413 - 3.440: 99.1801% ( 3) 00:14:54.165 3.440 - 3.467: 99.2345% ( 11) 00:14:54.165 3.467 - 3.493: 99.2443% ( 2) 00:14:54.165 3.493 - 3.520: 99.2641% ( 4) 00:14:54.165 3.520 - 3.547: 99.2937% ( 6) 00:14:54.165 3.547 - 3.573: 99.3085% ( 3) 00:14:54.165 3.573 - 3.600: 99.3283% ( 4) 00:14:54.165 3.600 - 3.627: 99.3332% ( 1) 00:14:54.165 3.627 - 3.653: 99.3382% ( 1) 00:14:54.165 3.653 - 3.680: 99.3481% ( 2) 00:14:54.165 3.680 - 3.707: 99.3678% ( 4) 00:14:54.165 3.707 - 3.733: 99.3876% ( 4) 00:14:54.165 3.733 - 3.760: 99.3925% ( 1) 00:14:54.165 3.760 - 3.787: 99.3974% ( 1) 00:14:54.165 3.787 - 3.813: 99.4024% ( 1) 00:14:54.165 3.813 - 3.840: 99.4123% ( 2) 00:14:54.165 3.840 - 3.867: 
99.4172% ( 1) 00:14:54.165 3.867 - 3.893: 99.4320% ( 3) 00:14:54.165 3.920 - 3.947: 99.4419% ( 2) 00:14:54.165 3.947 - 3.973: 99.4468% ( 1) 00:14:54.165 3.973 - 4.000: 99.4518% ( 1) 00:14:54.165 4.000 - 4.027: 99.4567% ( 1) 00:14:54.165 4.027 - 4.053: 99.4616% ( 1) 00:14:54.165 4.160 - 4.187: 99.4666% ( 1) 00:14:54.165 4.187 - 4.213: 99.4715% ( 1) 00:14:54.165 4.267 - 4.293: 99.4765% ( 1) 00:14:54.165 4.427 - 4.453: 99.4814% ( 1) 00:14:54.165 4.507 - 4.533: 99.4913% ( 2) 00:14:54.165 4.560 - 4.587: 99.4962% ( 1) 00:14:54.165 4.587 - 4.613: 99.5012% ( 1) 00:14:54.165 4.613 - 4.640: 99.5061% ( 1) 00:14:54.165 4.640 - 4.667: 99.5209% ( 3) 00:14:54.165 4.667 - 4.693: 99.5259% ( 1) 00:14:54.165 4.693 - 4.720: 99.5357% ( 2) 00:14:54.165 4.747 - 4.773: 99.5407% ( 1) 00:14:54.165 4.773 - 4.800: 99.5555% ( 3) 00:14:54.165 4.853 - 4.880: 99.5703% ( 3) 00:14:54.165 4.880 - 4.907: 99.5752% ( 1) 00:14:54.165 4.933 - 4.960: 99.5901% ( 3) 00:14:54.165 5.013 - 5.040: 99.5999% ( 2) 00:14:54.165 5.040 - 5.067: 99.6148% ( 3) 00:14:54.165 5.067 - 5.093: 99.6246% ( 2) 00:14:54.165 5.093 - 5.120: 99.6395% ( 3) 00:14:54.165 5.120 - 5.147: 99.6493% ( 2) 00:14:54.165 5.147 - 5.173: 99.6592% ( 2) 00:14:54.165 5.200 - 5.227: 99.6641% ( 1) 00:14:54.165 5.227 - 5.253: 99.6691% ( 1) 00:14:54.165 5.253 - 5.280: 99.6790% ( 2) 00:14:54.165 5.280 - 5.307: 99.6888% ( 2) 00:14:54.165 5.333 - 5.360: 99.6938% ( 1) 00:14:54.165 5.360 - 5.387: 99.7086% ( 3) 00:14:54.165 5.413 - 5.440: 99.7135% ( 1) 00:14:54.165 5.440 - 5.467: 99.7185% ( 1) 00:14:54.165 5.467 - 5.493: 99.7234% ( 1) 00:14:54.165 5.520 - 5.547: 99.7333% ( 2) 00:14:54.165 5.600 - 5.627: 99.7382% ( 1) 00:14:54.165 5.627 - 5.653: 99.7481% ( 2) 00:14:54.165 5.653 - 5.680: 99.7530% ( 1) 00:14:54.165 5.680 - 5.707: 99.7580% ( 1) 00:14:54.165 5.707 - 5.733: 99.7629% ( 1) 00:14:54.165 5.733 - 5.760: 99.7728% ( 2) 00:14:54.165 5.840 - 5.867: 99.7777% ( 1) 00:14:54.165 5.893 - 5.920: 99.7926% ( 3) 00:14:54.165 5.947 - 5.973: 99.7975% ( 1) 00:14:54.165 6.053 - 6.080: 99.8074% ( 2) 00:14:54.165 6.213 - 6.240: 99.8123% ( 1) 00:14:54.165 6.267 - 6.293: 99.8173% ( 1) 00:14:54.165 6.347 - 6.373: 99.8370% ( 4) 00:14:54.166 6.373 - 6.400: 99.8420% ( 1) 00:14:54.166 6.400 - 6.427: 99.8469% ( 1) 00:14:54.166 6.480 - 6.507: 99.8518% ( 1) 00:14:54.166 6.827 - 6.880: 99.8617% ( 2) 00:14:54.166 6.880 - 6.933: 99.8716% ( 2) 00:14:54.166 6.933 - 6.987: 99.8765% ( 1) 00:14:54.166 6.987 - 7.040: 99.8815% ( 1) 00:14:54.166 7.840 - 7.893: 99.8864% ( 1) 00:14:54.166 7.893 - 7.947: 99.8913% ( 1) 00:14:54.166 8.000 - 8.053: 99.8963% ( 1) 00:14:54.166 8.693 - 8.747: 99.9012% ( 1) 00:14:54.166 8.747 - 8.800: 99.9062% ( 1) 00:14:54.166 9.920 - 9.973: 99.9111% ( 1) 00:14:54.166 10.507 - 10.560: 99.9160% ( 1) 00:14:54.166 10.933 - 10.987: 99.9210% ( 1) 00:14:54.166 3986.773 - 4014.080: 100.0000% ( 16) 00:14:54.166 00:14:54.166 Complete histogram 00:14:54.166 ================== 00:14:54.166 Range in us Cumulative Count 00:14:54.166 1.633 - 1.640: 0.0049% ( 1) 00:14:54.166 1.640 - 1.647: 0.0148% ( 2) 00:14:54.166 1.647 - 1.653: 0.4593% ( 90) 00:14:54.166 1.653 - 1.660: 0.6026% ( 29) 00:14:54.166 1.660 - 1.667: 0.6569% ( 11) 00:14:54.166 1.667 - 1.673: 0.7359% ( 16) 00:14:54.166 1.673 - 1.680: 0.7606% ( 5) 00:14:54.166 1.680 - 1.687: 1.8669% ( 224) 00:14:54.166 1.687 - 1.693: 46.0019% ( 8936) 00:14:54.166 1.693 - 1.700: 53.1832% ( 1454) 00:14:54.166 1.700 - 1.707: 61.2634% ( 1636) 00:14:54.166 1.707 - 1.720: 77.4831% ( 3284) 00:14:54.166 1.720 - 1.733: 81.8837% ( 891) 00:14:54.166 1.733 - 1.747: 83.3111% 
( 289) 00:14:54.166 1.747 - 1.760: 88.7243% ( 1096) 00:14:54.166 1.760 - 1.773: 94.3399% ( 1137) 00:14:54.166 1.773 - 1.787: 97.3576% ( 611) 00:14:54.166 1.787 - 1.800: 98.5776% ( 247) 00:14:54.166 1.800 - 1.813: 98.9480% ( 75) 00:14:54.166 1.813 - 1.827: 99.0023% ( 11) 00:14:54.166 1.827 - 1.840: 99.0369% ( 7) 00:14:54.166 1.840 - 1.853: 99.0567% ( 4) 00:14:54.166 1.853 - 1.867: 99.0616% ( 1) 00:14:54.166 1.867 - 1.880: 99.0863% ( 5) 00:14:54.166 1.893 - 1.907: 99.0962% ( 2) 00:14:54.166 1.907 - 1.920: 99.1011% ( 1) 00:14:54.166 1.920 - 1.933: 99.1258% ( 5) 00:14:54.166 1.933 - 1.947: 99.1505% ( 5) 00:14:54.166 1.947 - 1.960: 99.1653% ( 3) 00:14:54.166 1.960 - 1.973: 99.1801% ( 3) 00:14:54.166 1.973 - 1.987: 99.1999% ( 4) 00:14:54.166 1.987 - 2.000: 99.2098% ( 2) 00:14:54.166 2.000 - 2.013: 99.2443% ( 7) 00:14:54.166 2.013 - 2.027: 99.2591% ( 3) 00:14:54.166 2.027 - 2.040: 99.3036% ( 9) 00:14:54.166 2.040 - 2.053: 99.3184% ( 3) 00:14:54.166 2.053 - 2.067: 99.3481% ( 6) 00:14:54.166 2.067 - 2.080: 99.3579% ( 2) 00:14:54.166 2.080 - 2.093: 99.3629% ( 1) 00:14:54.166 2.093 - 2.107: 99.3925% ( 6) [2024-10-21 11:59:30.592343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.166 2.107 - 2.120: 99.3974% ( 1) 00:14:54.166 2.120 - 2.133: 99.4024% ( 1) 00:14:54.166 2.147 - 2.160: 99.4123% ( 2) 00:14:54.166 2.173 - 2.187: 99.4221% ( 2) 00:14:54.166 2.187 - 2.200: 99.4271% ( 1) 00:14:54.166 2.200 - 2.213: 99.4370% ( 2) 00:14:54.166 2.227 - 2.240: 99.4419% ( 1) 00:14:54.166 2.240 - 2.253: 99.4468% ( 1) 00:14:54.166 2.253 - 2.267: 99.4567% ( 2) 00:14:54.166 2.280 - 2.293: 99.4666% ( 2) 00:14:54.166 2.333 - 2.347: 99.4715% ( 1) 00:14:54.166 2.347 - 2.360: 99.4765% ( 1) 00:14:54.166 2.493 - 2.507: 99.4814% ( 1) 00:14:54.166 2.560 - 2.573: 99.4863% ( 1) 00:14:54.166 2.587 - 2.600: 99.4913% ( 1) 00:14:54.166 2.600 - 2.613: 99.4962% ( 1) 00:14:54.166 2.680 - 2.693: 99.5012% ( 1) 00:14:54.166 2.747 - 2.760: 99.5061% ( 1) 00:14:54.166 2.760 - 2.773: 99.5110% ( 1) 00:14:54.166 2.827 - 2.840: 99.5160% ( 1) 00:14:54.166 2.947 - 2.960: 99.5209% ( 1) 00:14:54.166 2.960 - 2.973: 99.5259% ( 1) 00:14:54.166 3.093 - 3.107: 99.5308% ( 1) 00:14:54.166 3.200 - 3.213: 99.5357% ( 1) 00:14:54.166 3.240 - 3.253: 99.5407% ( 1) 00:14:54.166 3.280 - 3.293: 99.5456% ( 1) 00:14:54.166 3.360 - 3.373: 99.5506% ( 1) 00:14:54.166 3.520 - 3.547: 99.5555% ( 1) 00:14:54.166 3.547 - 3.573: 99.5604% ( 1) 00:14:54.166 3.573 - 3.600: 99.5703% ( 2) 00:14:54.166 3.707 - 3.733: 99.5752% ( 1) 00:14:54.166 3.760 - 3.787: 99.5802% ( 1) 00:14:54.166 3.787 - 3.813: 99.5851% ( 1) 00:14:54.166 3.867 - 3.893: 99.5901% ( 1) 00:14:54.166 4.107 - 4.133: 99.5950% ( 1) 00:14:54.166 4.480 - 4.507: 99.5999% ( 1) 00:14:54.166 4.587 - 4.613: 99.6049% ( 1) 00:14:54.166 4.800 - 4.827: 99.6098% ( 1) 00:14:54.166 4.827 - 4.853: 99.6148% ( 1) 00:14:54.166 5.307 - 5.333: 99.6197% ( 1) 00:14:54.166 5.627 - 5.653: 99.6246% ( 1) 00:14:54.166 6.187 - 6.213: 99.6296% ( 1) 00:14:54.166 6.213 - 6.240: 99.6345% ( 1) 00:14:54.166 6.427 - 6.453: 99.6395% ( 1) 00:14:54.166 6.667 - 6.693: 99.6444% ( 1) 00:14:54.166 6.747 - 6.773: 99.6493% ( 1) 00:14:54.166 15.893 - 16.000: 99.6543% ( 1) 00:14:54.166 87.467 - 87.893: 99.6592% ( 1) 00:14:54.166 99.413 - 99.840: 99.6641% ( 1) 00:14:54.166 3577.173 - 3604.480: 99.6691% ( 1) 00:14:54.166 3850.240 - 3877.547: 99.6740% ( 1) 00:14:54.166 3986.773 - 4014.080: 100.0000% ( 66) 00:14:54.166 00:14:54.166 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:54.166 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:54.166 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:54.166 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:54.166 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.427 [ 00:14:54.428 { 00:14:54.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.428 "subtype": "Discovery", 00:14:54.428 "listen_addresses": [], 00:14:54.428 "allow_any_host": true, 00:14:54.428 "hosts": [] 00:14:54.428 }, 00:14:54.428 { 00:14:54.428 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.428 "subtype": "NVMe", 00:14:54.428 "listen_addresses": [ 00:14:54.428 { 00:14:54.428 "trtype": "VFIOUSER", 00:14:54.428 "adrfam": "IPv4", 00:14:54.428 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.428 "trsvcid": "0" 00:14:54.428 } 00:14:54.428 ], 00:14:54.428 "allow_any_host": true, 00:14:54.428 "hosts": [], 00:14:54.428 "serial_number": "SPDK1", 00:14:54.428 "model_number": "SPDK bdev Controller", 00:14:54.428 "max_namespaces": 32, 00:14:54.428 "min_cntlid": 1, 00:14:54.428 "max_cntlid": 65519, 00:14:54.428 "namespaces": [ 00:14:54.428 { 00:14:54.428 "nsid": 1, 00:14:54.428 "bdev_name": "Malloc1", 00:14:54.428 "name": "Malloc1", 00:14:54.428 "nguid": "B8DFA5D2F362465AB9619D9A609B9007", 00:14:54.428 "uuid": "b8dfa5d2-f362-465a-b961-9d9a609b9007" 00:14:54.428 } 00:14:54.428 ] 00:14:54.428 }, 00:14:54.428 { 00:14:54.428 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.428 "subtype": "NVMe", 00:14:54.428 "listen_addresses": [ 00:14:54.428 { 00:14:54.428 "trtype": "VFIOUSER", 00:14:54.428 "adrfam": "IPv4", 00:14:54.428 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.428 "trsvcid": "0" 00:14:54.428 } 00:14:54.428 ], 00:14:54.428 "allow_any_host": true, 00:14:54.428 "hosts": [], 00:14:54.428 "serial_number": "SPDK2", 00:14:54.428 "model_number": "SPDK bdev Controller", 00:14:54.428 "max_namespaces": 32, 00:14:54.428 "min_cntlid": 1, 00:14:54.428 "max_cntlid": 65519, 00:14:54.428 "namespaces": [ 00:14:54.428 { 00:14:54.428 "nsid": 1, 00:14:54.428 "bdev_name": "Malloc2", 00:14:54.428 "name": "Malloc2", 00:14:54.428 "nguid": "A49C9F9B96C9438EA1494212F343C99C", 00:14:54.428 "uuid": "a49c9f9b-96c9-438e-a149-4212f343c99c" 00:14:54.428 } 00:14:54.428 ] 00:14:54.428 } 00:14:54.428 ] 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=931454 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 
00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:54.428 11:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:54.428 [2024-10-21 11:59:30.959722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.428 Malloc3 00:14:54.428 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:54.688 [2024-10-21 11:59:31.178275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.688 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.688 Asynchronous Event Request test 00:14:54.688 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.688 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.688 Registering asynchronous event callbacks... 00:14:54.688 Starting namespace attribute notice tests for all controllers... 00:14:54.688 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:54.688 aer_cb - Changed Namespace 00:14:54.688 Cleaning up... 
00:14:54.949 [ 00:14:54.949 { 00:14:54.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.949 "subtype": "Discovery", 00:14:54.949 "listen_addresses": [], 00:14:54.949 "allow_any_host": true, 00:14:54.949 "hosts": [] 00:14:54.949 }, 00:14:54.949 { 00:14:54.949 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.949 "subtype": "NVMe", 00:14:54.949 "listen_addresses": [ 00:14:54.949 { 00:14:54.949 "trtype": "VFIOUSER", 00:14:54.949 "adrfam": "IPv4", 00:14:54.949 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.949 "trsvcid": "0" 00:14:54.949 } 00:14:54.949 ], 00:14:54.949 "allow_any_host": true, 00:14:54.949 "hosts": [], 00:14:54.949 "serial_number": "SPDK1", 00:14:54.949 "model_number": "SPDK bdev Controller", 00:14:54.949 "max_namespaces": 32, 00:14:54.949 "min_cntlid": 1, 00:14:54.949 "max_cntlid": 65519, 00:14:54.949 "namespaces": [ 00:14:54.949 { 00:14:54.949 "nsid": 1, 00:14:54.949 "bdev_name": "Malloc1", 00:14:54.949 "name": "Malloc1", 00:14:54.949 "nguid": "B8DFA5D2F362465AB9619D9A609B9007", 00:14:54.949 "uuid": "b8dfa5d2-f362-465a-b961-9d9a609b9007" 00:14:54.949 }, 00:14:54.949 { 00:14:54.949 "nsid": 2, 00:14:54.949 "bdev_name": "Malloc3", 00:14:54.949 "name": "Malloc3", 00:14:54.949 "nguid": "DCE55F915DFF49AFBF3350C5E43A60ED", 00:14:54.949 "uuid": "dce55f91-5dff-49af-bf33-50c5e43a60ed" 00:14:54.949 } 00:14:54.949 ] 00:14:54.949 }, 00:14:54.949 { 00:14:54.949 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.949 "subtype": "NVMe", 00:14:54.949 "listen_addresses": [ 00:14:54.949 { 00:14:54.949 "trtype": "VFIOUSER", 00:14:54.949 "adrfam": "IPv4", 00:14:54.949 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.949 "trsvcid": "0" 00:14:54.949 } 00:14:54.949 ], 00:14:54.949 "allow_any_host": true, 00:14:54.949 "hosts": [], 00:14:54.949 "serial_number": "SPDK2", 00:14:54.949 "model_number": "SPDK bdev Controller", 00:14:54.949 "max_namespaces": 32, 00:14:54.949 "min_cntlid": 1, 00:14:54.949 "max_cntlid": 65519, 00:14:54.949 "namespaces": [ 00:14:54.949 { 00:14:54.949 "nsid": 1, 00:14:54.949 "bdev_name": "Malloc2", 00:14:54.949 "name": "Malloc2", 00:14:54.949 "nguid": "A49C9F9B96C9438EA1494212F343C99C", 00:14:54.949 "uuid": "a49c9f9b-96c9-438e-a149-4212f343c99c" 00:14:54.949 } 00:14:54.949 ] 00:14:54.949 } 00:14:54.949 ] 00:14:54.949 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 931454 00:14:54.949 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.949 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:54.949 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:54.949 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:54.949 [2024-10-21 11:59:31.413547] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:14:54.949 [2024-10-21 11:59:31.413593] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931570 ] 00:14:54.949 [2024-10-21 11:59:31.441342] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:54.949 [2024-10-21 11:59:31.452150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:54.949 [2024-10-21 11:59:31.452167] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc24fe9d000 00:14:54.949 [2024-10-21 11:59:31.453146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.454151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.455158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.456161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.457171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.458178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.459182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.460192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.949 [2024-10-21 11:59:31.461194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:54.949 [2024-10-21 11:59:31.461204] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc24fe92000 00:14:54.949 [2024-10-21 11:59:31.462121] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:54.949 [2024-10-21 11:59:31.470492] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:54.949 [2024-10-21 11:59:31.470510] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:54.949 [2024-10-21 11:59:31.475577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:54.949 [2024-10-21 11:59:31.475610] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:54.949 [2024-10-21 11:59:31.475669] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:54.949 [2024-10-21 
11:59:31.475682] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:54.949 [2024-10-21 11:59:31.475686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:54.949 [2024-10-21 11:59:31.476585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:54.949 [2024-10-21 11:59:31.476592] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:54.949 [2024-10-21 11:59:31.476597] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:54.949 [2024-10-21 11:59:31.477585] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:54.949 [2024-10-21 11:59:31.477591] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:54.949 [2024-10-21 11:59:31.477599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.478597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:54.950 [2024-10-21 11:59:31.478603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.479603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:54.950 [2024-10-21 11:59:31.479609] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:54.950 [2024-10-21 11:59:31.479613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.479617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.479721] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:54.950 [2024-10-21 11:59:31.479725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.479728] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:54.950 [2024-10-21 11:59:31.480610] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:54.950 [2024-10-21 11:59:31.481612] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:54.950 [2024-10-21 11:59:31.482617] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:14:54.950 [2024-10-21 11:59:31.483624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.950 [2024-10-21 11:59:31.483653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:54.950 [2024-10-21 11:59:31.484633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:54.950 [2024-10-21 11:59:31.484639] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:54.950 [2024-10-21 11:59:31.484643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.484657] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:54.950 [2024-10-21 11:59:31.484665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.484676] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.950 [2024-10-21 11:59:31.484680] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.950 [2024-10-21 11:59:31.484682] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:54.950 [2024-10-21 11:59:31.484692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.492326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.492336] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:54.950 [2024-10-21 11:59:31.492340] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:54.950 [2024-10-21 11:59:31.492343] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:54.950 [2024-10-21 11:59:31.492346] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:54.950 [2024-10-21 11:59:31.492349] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:54.950 [2024-10-21 11:59:31.492353] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:54.950 [2024-10-21 11:59:31.492356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.492361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.492369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.500325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.500335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.950 [2024-10-21 11:59:31.500342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.950 [2024-10-21 11:59:31.500348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.950 [2024-10-21 11:59:31.500354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.950 [2024-10-21 11:59:31.500357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.500364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.500371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.508323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.508329] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:54.950 [2024-10-21 11:59:31.508334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.508338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.508345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.508351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.516323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.516369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.516376] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.516382] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:54.950 [2024-10-21 11:59:31.516385] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:54.950 [2024-10-21 11:59:31.516388] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:14:54.950 [2024-10-21 11:59:31.516392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.524324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.524332] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:54.950 [2024-10-21 11:59:31.524339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.524345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.524350] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.950 [2024-10-21 11:59:31.524353] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.950 [2024-10-21 11:59:31.524355] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:54.950 [2024-10-21 11:59:31.524360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.532323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.532333] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.532339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.532344] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.950 [2024-10-21 11:59:31.532347] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.950 [2024-10-21 11:59:31.532350] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:54.950 [2024-10-21 11:59:31.532354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.950 [2024-10-21 11:59:31.540324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:54.950 [2024-10-21 11:59:31.540330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540336] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540360] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:54.950 [2024-10-21 11:59:31.540363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:54.950 [2024-10-21 11:59:31.540367] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:54.950 [2024-10-21 11:59:31.540380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.548325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.548335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.556323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.556333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.564323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.564333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.572325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.572337] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:55.210 [2024-10-21 11:59:31.572341] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:55.210 [2024-10-21 11:59:31.572343] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:55.210 [2024-10-21 11:59:31.572346] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:55.210 [2024-10-21 11:59:31.572348] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:55.210 [2024-10-21 11:59:31.572353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:55.210 [2024-10-21 11:59:31.572358] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:55.210 [2024-10-21 11:59:31.572361] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:55.210 [2024-10-21 11:59:31.572364] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.210 [2024-10-21 11:59:31.572368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.572373] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:55.210 [2024-10-21 11:59:31.572376] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.210 [2024-10-21 11:59:31.572378] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.210 [2024-10-21 11:59:31.572383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.572388] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:55.210 [2024-10-21 11:59:31.572391] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:55.210 [2024-10-21 11:59:31.572395] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.210 [2024-10-21 11:59:31.572400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:55.210 [2024-10-21 11:59:31.580326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.580337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.580345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:55.210 [2024-10-21 11:59:31.580350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:55.210 ===================================================== 00:14:55.210 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.210 ===================================================== 00:14:55.210 Controller Capabilities/Features 00:14:55.210 ================================ 00:14:55.210 Vendor ID: 4e58 00:14:55.210 Subsystem Vendor ID: 4e58 00:14:55.210 Serial Number: SPDK2 00:14:55.210 Model Number: SPDK bdev Controller 00:14:55.210 Firmware Version: 25.01 00:14:55.210 Recommended Arb Burst: 6 00:14:55.210 IEEE OUI Identifier: 8d 6b 50 00:14:55.210 Multi-path I/O 00:14:55.210 May have multiple subsystem ports: Yes 00:14:55.210 May have multiple controllers: Yes 00:14:55.210 Associated with SR-IOV VF: No 00:14:55.210 Max Data Transfer Size: 131072 00:14:55.210 Max Number of Namespaces: 32 00:14:55.210 Max Number of I/O Queues: 127 00:14:55.210 NVMe Specification Version (VS): 1.3 00:14:55.210 NVMe Specification Version (Identify): 1.3 00:14:55.210 Maximum Queue Entries: 256 00:14:55.210 Contiguous Queues Required: Yes 00:14:55.210 Arbitration Mechanisms Supported 00:14:55.210 Weighted Round Robin: Not Supported 00:14:55.211 Vendor Specific: Not Supported 00:14:55.211 Reset Timeout: 15000 ms 00:14:55.211 Doorbell Stride: 4 bytes 00:14:55.211 NVM Subsystem Reset: Not Supported 00:14:55.211 Command 
Sets Supported 00:14:55.211 NVM Command Set: Supported 00:14:55.211 Boot Partition: Not Supported 00:14:55.211 Memory Page Size Minimum: 4096 bytes 00:14:55.211 Memory Page Size Maximum: 4096 bytes 00:14:55.211 Persistent Memory Region: Not Supported 00:14:55.211 Optional Asynchronous Events Supported 00:14:55.211 Namespace Attribute Notices: Supported 00:14:55.211 Firmware Activation Notices: Not Supported 00:14:55.211 ANA Change Notices: Not Supported 00:14:55.211 PLE Aggregate Log Change Notices: Not Supported 00:14:55.211 LBA Status Info Alert Notices: Not Supported 00:14:55.211 EGE Aggregate Log Change Notices: Not Supported 00:14:55.211 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.211 Zone Descriptor Change Notices: Not Supported 00:14:55.211 Discovery Log Change Notices: Not Supported 00:14:55.211 Controller Attributes 00:14:55.211 128-bit Host Identifier: Supported 00:14:55.211 Non-Operational Permissive Mode: Not Supported 00:14:55.211 NVM Sets: Not Supported 00:14:55.211 Read Recovery Levels: Not Supported 00:14:55.211 Endurance Groups: Not Supported 00:14:55.211 Predictable Latency Mode: Not Supported 00:14:55.211 Traffic Based Keep ALive: Not Supported 00:14:55.211 Namespace Granularity: Not Supported 00:14:55.211 SQ Associations: Not Supported 00:14:55.211 UUID List: Not Supported 00:14:55.211 Multi-Domain Subsystem: Not Supported 00:14:55.211 Fixed Capacity Management: Not Supported 00:14:55.211 Variable Capacity Management: Not Supported 00:14:55.211 Delete Endurance Group: Not Supported 00:14:55.211 Delete NVM Set: Not Supported 00:14:55.211 Extended LBA Formats Supported: Not Supported 00:14:55.211 Flexible Data Placement Supported: Not Supported 00:14:55.211 00:14:55.211 Controller Memory Buffer Support 00:14:55.211 ================================ 00:14:55.211 Supported: No 00:14:55.211 00:14:55.211 Persistent Memory Region Support 00:14:55.211 ================================ 00:14:55.211 Supported: No 00:14:55.211 00:14:55.211 Admin Command Set Attributes 00:14:55.211 ============================ 00:14:55.211 Security Send/Receive: Not Supported 00:14:55.211 Format NVM: Not Supported 00:14:55.211 Firmware Activate/Download: Not Supported 00:14:55.211 Namespace Management: Not Supported 00:14:55.211 Device Self-Test: Not Supported 00:14:55.211 Directives: Not Supported 00:14:55.211 NVMe-MI: Not Supported 00:14:55.211 Virtualization Management: Not Supported 00:14:55.211 Doorbell Buffer Config: Not Supported 00:14:55.211 Get LBA Status Capability: Not Supported 00:14:55.211 Command & Feature Lockdown Capability: Not Supported 00:14:55.211 Abort Command Limit: 4 00:14:55.211 Async Event Request Limit: 4 00:14:55.211 Number of Firmware Slots: N/A 00:14:55.211 Firmware Slot 1 Read-Only: N/A 00:14:55.211 Firmware Activation Without Reset: N/A 00:14:55.211 Multiple Update Detection Support: N/A 00:14:55.211 Firmware Update Granularity: No Information Provided 00:14:55.211 Per-Namespace SMART Log: No 00:14:55.211 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.211 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:55.211 Command Effects Log Page: Supported 00:14:55.211 Get Log Page Extended Data: Supported 00:14:55.211 Telemetry Log Pages: Not Supported 00:14:55.211 Persistent Event Log Pages: Not Supported 00:14:55.211 Supported Log Pages Log Page: May Support 00:14:55.211 Commands Supported & Effects Log Page: Not Supported 00:14:55.211 Feature Identifiers & Effects Log Page:May Support 00:14:55.211 NVMe-MI Commands & Effects Log Page: May Support 
00:14:55.211 Data Area 4 for Telemetry Log: Not Supported 00:14:55.211 Error Log Page Entries Supported: 128 00:14:55.211 Keep Alive: Supported 00:14:55.211 Keep Alive Granularity: 10000 ms 00:14:55.211 00:14:55.211 NVM Command Set Attributes 00:14:55.211 ========================== 00:14:55.211 Submission Queue Entry Size 00:14:55.211 Max: 64 00:14:55.211 Min: 64 00:14:55.211 Completion Queue Entry Size 00:14:55.211 Max: 16 00:14:55.211 Min: 16 00:14:55.211 Number of Namespaces: 32 00:14:55.211 Compare Command: Supported 00:14:55.211 Write Uncorrectable Command: Not Supported 00:14:55.211 Dataset Management Command: Supported 00:14:55.211 Write Zeroes Command: Supported 00:14:55.211 Set Features Save Field: Not Supported 00:14:55.211 Reservations: Not Supported 00:14:55.211 Timestamp: Not Supported 00:14:55.211 Copy: Supported 00:14:55.211 Volatile Write Cache: Present 00:14:55.211 Atomic Write Unit (Normal): 1 00:14:55.211 Atomic Write Unit (PFail): 1 00:14:55.211 Atomic Compare & Write Unit: 1 00:14:55.211 Fused Compare & Write: Supported 00:14:55.211 Scatter-Gather List 00:14:55.211 SGL Command Set: Supported (Dword aligned) 00:14:55.211 SGL Keyed: Not Supported 00:14:55.211 SGL Bit Bucket Descriptor: Not Supported 00:14:55.211 SGL Metadata Pointer: Not Supported 00:14:55.211 Oversized SGL: Not Supported 00:14:55.211 SGL Metadata Address: Not Supported 00:14:55.211 SGL Offset: Not Supported 00:14:55.211 Transport SGL Data Block: Not Supported 00:14:55.211 Replay Protected Memory Block: Not Supported 00:14:55.211 00:14:55.211 Firmware Slot Information 00:14:55.211 ========================= 00:14:55.211 Active slot: 1 00:14:55.211 Slot 1 Firmware Revision: 25.01 00:14:55.211 00:14:55.211 00:14:55.211 Commands Supported and Effects 00:14:55.211 ============================== 00:14:55.211 Admin Commands 00:14:55.211 -------------- 00:14:55.211 Get Log Page (02h): Supported 00:14:55.211 Identify (06h): Supported 00:14:55.211 Abort (08h): Supported 00:14:55.211 Set Features (09h): Supported 00:14:55.211 Get Features (0Ah): Supported 00:14:55.211 Asynchronous Event Request (0Ch): Supported 00:14:55.211 Keep Alive (18h): Supported 00:14:55.211 I/O Commands 00:14:55.211 ------------ 00:14:55.211 Flush (00h): Supported LBA-Change 00:14:55.211 Write (01h): Supported LBA-Change 00:14:55.211 Read (02h): Supported 00:14:55.211 Compare (05h): Supported 00:14:55.211 Write Zeroes (08h): Supported LBA-Change 00:14:55.211 Dataset Management (09h): Supported LBA-Change 00:14:55.211 Copy (19h): Supported LBA-Change 00:14:55.211 00:14:55.211 Error Log 00:14:55.211 ========= 00:14:55.211 00:14:55.211 Arbitration 00:14:55.211 =========== 00:14:55.211 Arbitration Burst: 1 00:14:55.211 00:14:55.211 Power Management 00:14:55.211 ================ 00:14:55.211 Number of Power States: 1 00:14:55.211 Current Power State: Power State #0 00:14:55.211 Power State #0: 00:14:55.211 Max Power: 0.00 W 00:14:55.211 Non-Operational State: Operational 00:14:55.211 Entry Latency: Not Reported 00:14:55.211 Exit Latency: Not Reported 00:14:55.211 Relative Read Throughput: 0 00:14:55.211 Relative Read Latency: 0 00:14:55.211 Relative Write Throughput: 0 00:14:55.211 Relative Write Latency: 0 00:14:55.211 Idle Power: Not Reported 00:14:55.211 Active Power: Not Reported 00:14:55.211 Non-Operational Permissive Mode: Not Supported 00:14:55.211 00:14:55.211 Health Information 00:14:55.211 ================== 00:14:55.211 Critical Warnings: 00:14:55.211 Available Spare Space: OK 00:14:55.211 Temperature: OK 00:14:55.211 Device 
Reliability: OK 00:14:55.211 Read Only: No 00:14:55.211 Volatile Memory Backup: OK 00:14:55.211 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:55.211 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:55.211 Available Spare: 0% 00:14:55.211 Available Spare Threshold: 0% 00:14:55.211 Life Percentage Used: 0% 00:14:55.211 Data Units Read: 0 00:14:55.211 Data Units Written: 0 00:14:55.211 Host Read Commands: 0 00:14:55.211 Host Write Commands: 0 00:14:55.211 Controller Busy Time: 0 minutes 00:14:55.211 Power Cycles: 0 00:14:55.211 Power On Hours: 0 hours 00:14:55.211 Unsafe Shutdowns: 0 00:14:55.211 Unrecoverable Media Errors: 0 00:14:55.211 Lifetime Error Log Entries: 0 00:14:55.211 Warning Temperature Time: 0 minutes 00:14:55.211 Critical Temperature Time: 0 minutes 00:14:55.211 [2024-10-21 11:59:31.580420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:55.211 [2024-10-21 11:59:31.588325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:55.211 [2024-10-21 11:59:31.588350] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:55.211 [2024-10-21 11:59:31.588356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.211 [2024-10-21 11:59:31.588361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.211 [2024-10-21 11:59:31.588365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.211 [2024-10-21 11:59:31.588370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.211 [2024-10-21 11:59:31.588406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:55.211 [2024-10-21 11:59:31.588413] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:55.211 [2024-10-21 11:59:31.589406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.211 [2024-10-21 11:59:31.589442] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:55.211 [2024-10-21 11:59:31.589446] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:55.211 [2024-10-21 11:59:31.590411] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:55.211 [2024-10-21 11:59:31.590420] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:55.211 [2024-10-21 11:59:31.590463] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:55.211 [2024-10-21 11:59:31.593325] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.211 00:14:55.211 Number of Queues 00:14:55.211 ================ 00:14:55.211 Number of
I/O Submission Queues: 127 00:14:55.211 Number of I/O Completion Queues: 127 00:14:55.211 00:14:55.211 Active Namespaces 00:14:55.211 ================= 00:14:55.211 Namespace ID:1 00:14:55.211 Error Recovery Timeout: Unlimited 00:14:55.211 Command Set Identifier: NVM (00h) 00:14:55.211 Deallocate: Supported 00:14:55.211 Deallocated/Unwritten Error: Not Supported 00:14:55.211 Deallocated Read Value: Unknown 00:14:55.211 Deallocate in Write Zeroes: Not Supported 00:14:55.211 Deallocated Guard Field: 0xFFFF 00:14:55.211 Flush: Supported 00:14:55.211 Reservation: Supported 00:14:55.211 Namespace Sharing Capabilities: Multiple Controllers 00:14:55.211 Size (in LBAs): 131072 (0GiB) 00:14:55.211 Capacity (in LBAs): 131072 (0GiB) 00:14:55.211 Utilization (in LBAs): 131072 (0GiB) 00:14:55.211 NGUID: A49C9F9B96C9438EA1494212F343C99C 00:14:55.211 UUID: a49c9f9b-96c9-438e-a149-4212f343c99c 00:14:55.211 Thin Provisioning: Not Supported 00:14:55.211 Per-NS Atomic Units: Yes 00:14:55.211 Atomic Boundary Size (Normal): 0 00:14:55.211 Atomic Boundary Size (PFail): 0 00:14:55.211 Atomic Boundary Offset: 0 00:14:55.211 Maximum Single Source Range Length: 65535 00:14:55.211 Maximum Copy Length: 65535 00:14:55.211 Maximum Source Range Count: 1 00:14:55.211 NGUID/EUI64 Never Reused: No 00:14:55.211 Namespace Write Protected: No 00:14:55.211 Number of LBA Formats: 1 00:14:55.211 Current LBA Format: LBA Format #00 00:14:55.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:55.211 00:14:55.212 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:55.212 [2024-10-21 11:59:31.771347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.495 Initializing NVMe Controllers 00:15:00.495 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:00.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:00.495 Initialization complete. Launching workers. 
00:15:00.495 ======================================================== 00:15:00.495 Latency(us) 00:15:00.495 Device Information : IOPS MiB/s Average min max 00:15:00.495 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40022.60 156.34 3198.38 842.92 6963.14 00:15:00.495 ======================================================== 00:15:00.495 Total : 40022.60 156.34 3198.38 842.92 6963.14 00:15:00.495 00:15:00.495 [2024-10-21 11:59:36.873507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.495 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:00.495 [2024-10-21 11:59:37.054093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.785 Initializing NVMe Controllers 00:15:05.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:05.785 Initialization complete. Launching workers. 00:15:05.785 ======================================================== 00:15:05.785 Latency(us) 00:15:05.785 Device Information : IOPS MiB/s Average min max 00:15:05.785 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40029.20 156.36 3197.56 840.62 6959.14 00:15:05.785 ======================================================== 00:15:05.785 Total : 40029.20 156.36 3197.56 840.62 6959.14 00:15:05.785 00:15:05.785 [2024-10-21 11:59:42.073855] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.785 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:05.785 [2024-10-21 11:59:42.268060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.076 [2024-10-21 11:59:47.402409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.076 Initializing NVMe Controllers 00:15:11.076 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.076 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.076 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:11.076 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:11.076 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:11.076 Initialization complete. Launching workers. 
00:15:11.076 Starting thread on core 2 00:15:11.076 Starting thread on core 3 00:15:11.076 Starting thread on core 1 00:15:11.077 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:11.077 [2024-10-21 11:59:47.634727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.456 [2024-10-21 11:59:50.687212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.457 Initializing NVMe Controllers 00:15:14.457 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.457 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:14.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:14.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:14.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:14.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:14.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:14.457 Initialization complete. Launching workers. 00:15:14.457 Starting thread on core 1 with urgent priority queue 00:15:14.457 Starting thread on core 2 with urgent priority queue 00:15:14.457 Starting thread on core 3 with urgent priority queue 00:15:14.457 Starting thread on core 0 with urgent priority queue 00:15:14.457 SPDK bdev Controller (SPDK2 ) core 0: 14623.67 IO/s 6.84 secs/100000 ios 00:15:14.457 SPDK bdev Controller (SPDK2 ) core 1: 13282.00 IO/s 7.53 secs/100000 ios 00:15:14.457 SPDK bdev Controller (SPDK2 ) core 2: 10792.67 IO/s 9.27 secs/100000 ios 00:15:14.457 SPDK bdev Controller (SPDK2 ) core 3: 12567.00 IO/s 7.96 secs/100000 ios 00:15:14.457 ======================================================== 00:15:14.457 00:15:14.457 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:14.457 [2024-10-21 11:59:50.908670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.457 Initializing NVMe Controllers 00:15:14.457 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.457 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.457 Namespace ID: 1 size: 0GB 00:15:14.457 Initialization complete. 00:15:14.457 INFO: using host memory buffer for IO 00:15:14.457 Hello world! 
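Each of the tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target through the same -r transport ID string rather than a PCI address. A minimal sketch of the equivalent manual invocations, with flags copied from the runs above ($SPDK standing in for the build-tree path is an assumption):
  # Transport ID: vfio-user transport, socket directory, and subsystem NQN
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # Attach, identify namespace 1, and do one write/read round trip (the hello_world run above)
  "$SPDK"/build/examples/hello_world -d 256 -g -r "$TRID"
  # 4 KiB reads for 5 s at queue depth 128, pinned to core 1 (-c 0x2), as in the perf runs above
  "$SPDK"/build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 -r "$TRID"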
00:15:14.457 [2024-10-21 11:59:50.918726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.457 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:14.717 [2024-10-21 11:59:51.139135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.659 Initializing NVMe Controllers 00:15:15.659 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.659 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.659 Initialization complete. Launching workers. 00:15:15.659 submit (in ns) avg, min, max = 6900.3, 2840.8, 3998053.3 00:15:15.659 complete (in ns) avg, min, max = 15188.5, 1627.5, 3997174.2 00:15:15.659 00:15:15.659 Submit histogram 00:15:15.659 ================ 00:15:15.659 Range in us Cumulative Count 00:15:15.659 2.840 - 2.853: 0.3230% ( 66) 00:15:15.659 2.853 - 2.867: 1.7031% ( 282) 00:15:15.659 2.867 - 2.880: 4.6493% ( 602) 00:15:15.659 2.880 - 2.893: 10.4488% ( 1185) 00:15:15.659 2.893 - 2.907: 16.0916% ( 1153) 00:15:15.659 2.907 - 2.920: 20.7263% ( 947) 00:15:15.659 2.920 - 2.933: 25.9531% ( 1068) 00:15:15.659 2.933 - 2.947: 31.6302% ( 1160) 00:15:15.659 2.947 - 2.960: 37.4296% ( 1185) 00:15:15.659 2.960 - 2.973: 42.0545% ( 945) 00:15:15.659 2.973 - 2.987: 47.3156% ( 1075) 00:15:15.659 2.987 - 3.000: 52.3026% ( 1019) 00:15:15.659 3.000 - 3.013: 60.2310% ( 1620) 00:15:15.659 3.013 - 3.027: 69.5688% ( 1908) 00:15:15.659 3.027 - 3.040: 78.3194% ( 1788) 00:15:15.659 3.040 - 3.053: 86.0128% ( 1572) 00:15:15.659 3.053 - 3.067: 91.7095% ( 1164) 00:15:15.659 3.067 - 3.080: 95.5660% ( 788) 00:15:15.659 3.080 - 3.093: 97.9983% ( 497) 00:15:15.659 3.093 - 3.107: 98.9233% ( 189) 00:15:15.659 3.107 - 3.120: 99.3393% ( 85) 00:15:15.659 3.120 - 3.133: 99.5449% ( 42) 00:15:15.659 3.133 - 3.147: 99.5791% ( 7) 00:15:15.659 3.147 - 3.160: 99.5840% ( 1) 00:15:15.659 3.173 - 3.187: 99.5889% ( 1) 00:15:15.659 3.200 - 3.213: 99.5938% ( 1) 00:15:15.659 3.253 - 3.267: 99.5987% ( 1) 00:15:15.659 3.267 - 3.280: 99.6036% ( 1) 00:15:15.659 3.467 - 3.493: 99.6134% ( 2) 00:15:15.659 3.600 - 3.627: 99.6183% ( 1) 00:15:15.659 3.680 - 3.707: 99.6232% ( 1) 00:15:15.659 3.840 - 3.867: 99.6281% ( 1) 00:15:15.659 3.893 - 3.920: 99.6329% ( 1) 00:15:15.659 4.587 - 4.613: 99.6378% ( 1) 00:15:15.659 4.613 - 4.640: 99.6427% ( 1) 00:15:15.659 4.667 - 4.693: 99.6476% ( 1) 00:15:15.659 4.693 - 4.720: 99.6525% ( 1) 00:15:15.659 4.720 - 4.747: 99.6623% ( 2) 00:15:15.659 4.747 - 4.773: 99.6721% ( 2) 00:15:15.659 4.773 - 4.800: 99.6770% ( 1) 00:15:15.659 4.907 - 4.933: 99.6819% ( 1) 00:15:15.659 4.933 - 4.960: 99.6868% ( 1) 00:15:15.659 4.987 - 5.013: 99.6917% ( 1) 00:15:15.659 5.013 - 5.040: 99.7015% ( 2) 00:15:15.659 5.067 - 5.093: 99.7064% ( 1) 00:15:15.659 5.093 - 5.120: 99.7113% ( 1) 00:15:15.659 5.147 - 5.173: 99.7210% ( 2) 00:15:15.659 5.173 - 5.200: 99.7259% ( 1) 00:15:15.659 5.200 - 5.227: 99.7357% ( 2) 00:15:15.659 5.253 - 5.280: 99.7455% ( 2) 00:15:15.659 5.307 - 5.333: 99.7504% ( 1) 00:15:15.659 5.333 - 5.360: 99.7651% ( 3) 00:15:15.659 5.360 - 5.387: 99.7700% ( 1) 00:15:15.659 5.387 - 5.413: 99.7749% ( 1) 00:15:15.659 5.413 - 5.440: 99.7798% ( 1) 00:15:15.659 5.493 - 5.520: 99.7847% ( 1) 00:15:15.659 5.520 - 5.547: 99.7896% ( 1) 00:15:15.659 5.547 - 5.573: 
99.7945% ( 1) 00:15:15.659 5.573 - 5.600: 99.7993% ( 1) 00:15:15.659 5.680 - 5.707: 99.8042% ( 1) 00:15:15.659 5.733 - 5.760: 99.8091% ( 1) 00:15:15.659 5.760 - 5.787: 99.8140% ( 1) 00:15:15.659 5.787 - 5.813: 99.8189% ( 1) 00:15:15.659 5.813 - 5.840: 99.8238% ( 1) 00:15:15.659 5.840 - 5.867: 99.8287% ( 1) 00:15:15.659 6.000 - 6.027: 99.8385% ( 2) 00:15:15.659 6.027 - 6.053: 99.8434% ( 1) 00:15:15.659 6.053 - 6.080: 99.8532% ( 2) 00:15:15.659 6.133 - 6.160: 99.8581% ( 1) 00:15:15.659 6.160 - 6.187: 99.8679% ( 2) 00:15:15.659 6.240 - 6.267: 99.8728% ( 1) 00:15:15.659 6.267 - 6.293: 99.8776% ( 1) 00:15:15.659 6.293 - 6.320: 99.8825% ( 1) 00:15:15.659 6.373 - 6.400: 99.8874% ( 1) 00:15:15.659 9.120 - 9.173: 99.8923% ( 1) 00:15:15.659 9.707 - 9.760: 99.8972% ( 1) 00:15:15.659 11.680 - 11.733: 99.9021% ( 1) 00:15:15.659 3986.773 - 4014.080: 100.0000% ( 20) 00:15:15.659 00:15:15.659 Complete histogram 00:15:15.659 ================== 00:15:15.659 [2024-10-21 11:59:52.232841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.921 Range in us Cumulative Count 00:15:15.921 1.627 - 1.633: 0.0147% ( 3) 00:15:15.921 1.633 - 1.640: 0.0245% ( 2) 00:15:15.921 1.640 - 1.647: 0.1811% ( 32) 00:15:15.921 1.647 - 1.653: 0.5481% ( 75) 00:15:15.921 1.653 - 1.660: 0.6166% ( 14) 00:15:15.921 1.660 - 1.667: 0.7341% ( 24) 00:15:15.921 1.667 - 1.673: 0.7635% ( 6) 00:15:15.921 1.673 - 1.680: 0.7733% ( 2) 00:15:15.921 1.680 - 1.687: 2.1387% ( 279) 00:15:15.921 1.687 - 1.693: 49.0922% ( 9594) 00:15:15.921 1.693 - 1.700: 54.4658% ( 1098) 00:15:15.921 1.700 - 1.707: 64.5622% ( 2063) 00:15:15.921 1.707 - 1.720: 79.6848% ( 3090) 00:15:15.921 1.720 - 1.733: 83.6294% ( 806) 00:15:15.921 1.733 - 1.747: 84.9753% ( 275) 00:15:15.921 1.747 - 1.760: 89.0814% ( 839) 00:15:15.921 1.760 - 1.773: 94.5725% ( 1122) 00:15:15.921 1.773 - 1.787: 97.7977% ( 659) 00:15:15.921 1.787 - 1.800: 99.2170% ( 290) 00:15:15.921 1.800 - 1.813: 99.5008% ( 58) 00:15:15.921 1.813 - 1.827: 99.5497% ( 10) 00:15:15.921 3.360 - 3.373: 99.5546% ( 1) 00:15:15.921 3.600 - 3.627: 99.5595% ( 1) 00:15:15.921 3.760 - 3.787: 99.5644% ( 1) 00:15:15.921 3.787 - 3.813: 99.5742% ( 2) 00:15:15.921 3.867 - 3.893: 99.5791% ( 1) 00:15:15.921 3.893 - 3.920: 99.5840% ( 1) 00:15:15.921 3.947 - 3.973: 99.5889% ( 1) 00:15:15.921 4.107 - 4.133: 99.5938% ( 1) 00:15:15.921 4.133 - 4.160: 99.5987% ( 1) 00:15:15.921 4.267 - 4.293: 99.6036% ( 1) 00:15:15.921 4.400 - 4.427: 99.6085% ( 1) 00:15:15.921 4.427 - 4.453: 99.6134% ( 1) 00:15:15.921 4.747 - 4.773: 99.6183% ( 1) 00:15:15.921 4.800 - 4.827: 99.6232% ( 1) 00:15:15.921 4.987 - 5.013: 99.6281% ( 1) 00:15:15.921 5.280 - 5.307: 99.6329% ( 1) 00:15:15.921 6.107 - 6.133: 99.6378% ( 1) 00:15:15.921 6.160 - 6.187: 99.6427% ( 1) 00:15:15.921 9.280 - 9.333: 99.6476% ( 1) 00:15:15.921 10.187 - 10.240: 99.6525% ( 1) 00:15:15.921 10.507 - 10.560: 99.6574% ( 1) 00:15:15.921 56.320 - 56.747: 99.6623% ( 1) 00:15:15.921 3713.707 - 3741.013: 99.6672% ( 1) 00:15:15.921 3986.773 - 4014.080: 100.0000% ( 68) 00:15:15.921 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:15.921 [ 00:15:15.921 { 00:15:15.921 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.921 "subtype": "Discovery", 00:15:15.921 "listen_addresses": [], 00:15:15.921 "allow_any_host": true, 00:15:15.921 "hosts": [] 00:15:15.921 }, 00:15:15.921 { 00:15:15.921 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:15.921 "subtype": "NVMe", 00:15:15.921 "listen_addresses": [ 00:15:15.921 { 00:15:15.921 "trtype": "VFIOUSER", 00:15:15.921 "adrfam": "IPv4", 00:15:15.921 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:15.921 "trsvcid": "0" 00:15:15.921 } 00:15:15.921 ], 00:15:15.921 "allow_any_host": true, 00:15:15.921 "hosts": [], 00:15:15.921 "serial_number": "SPDK1", 00:15:15.921 "model_number": "SPDK bdev Controller", 00:15:15.921 "max_namespaces": 32, 00:15:15.921 "min_cntlid": 1, 00:15:15.921 "max_cntlid": 65519, 00:15:15.921 "namespaces": [ 00:15:15.921 { 00:15:15.921 "nsid": 1, 00:15:15.921 "bdev_name": "Malloc1", 00:15:15.921 "name": "Malloc1", 00:15:15.921 "nguid": "B8DFA5D2F362465AB9619D9A609B9007", 00:15:15.921 "uuid": "b8dfa5d2-f362-465a-b961-9d9a609b9007" 00:15:15.921 }, 00:15:15.921 { 00:15:15.921 "nsid": 2, 00:15:15.921 "bdev_name": "Malloc3", 00:15:15.921 "name": "Malloc3", 00:15:15.921 "nguid": "DCE55F915DFF49AFBF3350C5E43A60ED", 00:15:15.921 "uuid": "dce55f91-5dff-49af-bf33-50c5e43a60ed" 00:15:15.921 } 00:15:15.921 ] 00:15:15.921 }, 00:15:15.921 { 00:15:15.921 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:15.921 "subtype": "NVMe", 00:15:15.921 "listen_addresses": [ 00:15:15.921 { 00:15:15.921 "trtype": "VFIOUSER", 00:15:15.921 "adrfam": "IPv4", 00:15:15.921 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:15.921 "trsvcid": "0" 00:15:15.921 } 00:15:15.921 ], 00:15:15.921 "allow_any_host": true, 00:15:15.921 "hosts": [], 00:15:15.921 "serial_number": "SPDK2", 00:15:15.921 "model_number": "SPDK bdev Controller", 00:15:15.921 "max_namespaces": 32, 00:15:15.921 "min_cntlid": 1, 00:15:15.921 "max_cntlid": 65519, 00:15:15.921 "namespaces": [ 00:15:15.921 { 00:15:15.921 "nsid": 1, 00:15:15.921 "bdev_name": "Malloc2", 00:15:15.921 "name": "Malloc2", 00:15:15.921 "nguid": "A49C9F9B96C9438EA1494212F343C99C", 00:15:15.921 "uuid": "a49c9f9b-96c9-438e-a149-4212f343c99c" 00:15:15.921 } 00:15:15.921 ] 00:15:15.921 } 00:15:15.921 ] 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=935604 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:15.921 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:16.182 [2024-10-21 11:59:52.576909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.182 Malloc4 00:15:16.182 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:16.443 [2024-10-21 11:59:52.803457] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.443 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:16.443 Asynchronous Event Request test 00:15:16.443 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.443 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.443 Registering asynchronous event callbacks... 00:15:16.443 Starting namespace attribute notice tests for all controllers... 00:15:16.443 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:16.443 aer_cb - Changed Namespace 00:15:16.443 Cleaning up... 
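The namespace-attribute notice above is provoked by hot-attaching a second namespace while the aer tool waits on the touch file; the nvmf_get_subsystems listing that follows shows the resulting NSID 2 (Malloc4) under cnode2. A sketch of the RPC sequence, as traced above (the rpc.py path is shortened here; the full path appears in the trace):
  # 64 MiB malloc bdev with 512-byte blocks
  rpc.py bdev_malloc_create 64 512 --name Malloc4
  # Attach it as NSID 2 of cnode2; connected hosts receive a Namespace Attribute Changed notice
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  # Confirm the new namespace in the subsystem listing
  rpc.py nvmf_get_subsystems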
00:15:16.443 [ 00:15:16.443 { 00:15:16.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:16.443 "subtype": "Discovery", 00:15:16.443 "listen_addresses": [], 00:15:16.443 "allow_any_host": true, 00:15:16.443 "hosts": [] 00:15:16.443 }, 00:15:16.443 { 00:15:16.443 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:16.443 "subtype": "NVMe", 00:15:16.443 "listen_addresses": [ 00:15:16.443 { 00:15:16.443 "trtype": "VFIOUSER", 00:15:16.443 "adrfam": "IPv4", 00:15:16.443 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:16.443 "trsvcid": "0" 00:15:16.443 } 00:15:16.443 ], 00:15:16.443 "allow_any_host": true, 00:15:16.443 "hosts": [], 00:15:16.443 "serial_number": "SPDK1", 00:15:16.443 "model_number": "SPDK bdev Controller", 00:15:16.443 "max_namespaces": 32, 00:15:16.443 "min_cntlid": 1, 00:15:16.443 "max_cntlid": 65519, 00:15:16.443 "namespaces": [ 00:15:16.443 { 00:15:16.443 "nsid": 1, 00:15:16.443 "bdev_name": "Malloc1", 00:15:16.443 "name": "Malloc1", 00:15:16.443 "nguid": "B8DFA5D2F362465AB9619D9A609B9007", 00:15:16.443 "uuid": "b8dfa5d2-f362-465a-b961-9d9a609b9007" 00:15:16.443 }, 00:15:16.443 { 00:15:16.443 "nsid": 2, 00:15:16.443 "bdev_name": "Malloc3", 00:15:16.443 "name": "Malloc3", 00:15:16.443 "nguid": "DCE55F915DFF49AFBF3350C5E43A60ED", 00:15:16.443 "uuid": "dce55f91-5dff-49af-bf33-50c5e43a60ed" 00:15:16.443 } 00:15:16.443 ] 00:15:16.443 }, 00:15:16.443 { 00:15:16.443 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:16.443 "subtype": "NVMe", 00:15:16.443 "listen_addresses": [ 00:15:16.443 { 00:15:16.443 "trtype": "VFIOUSER", 00:15:16.443 "adrfam": "IPv4", 00:15:16.443 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:16.443 "trsvcid": "0" 00:15:16.443 } 00:15:16.443 ], 00:15:16.443 "allow_any_host": true, 00:15:16.443 "hosts": [], 00:15:16.443 "serial_number": "SPDK2", 00:15:16.443 "model_number": "SPDK bdev Controller", 00:15:16.443 "max_namespaces": 32, 00:15:16.443 "min_cntlid": 1, 00:15:16.443 "max_cntlid": 65519, 00:15:16.443 "namespaces": [ 00:15:16.443 { 00:15:16.443 "nsid": 1, 00:15:16.443 "bdev_name": "Malloc2", 00:15:16.443 "name": "Malloc2", 00:15:16.443 "nguid": "A49C9F9B96C9438EA1494212F343C99C", 00:15:16.443 "uuid": "a49c9f9b-96c9-438e-a149-4212f343c99c" 00:15:16.443 }, 00:15:16.443 { 00:15:16.443 "nsid": 2, 00:15:16.443 "bdev_name": "Malloc4", 00:15:16.443 "name": "Malloc4", 00:15:16.443 "nguid": "FFDA56CA4AF04C56B6F44984221AE31A", 00:15:16.443 "uuid": "ffda56ca-4af0-4c56-b6f4-4984221ae31a" 00:15:16.443 } 00:15:16.443 ] 00:15:16.443 } 00:15:16.443 ] 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 935604 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 926660 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 926660 ']' 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 926660 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.443 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926660 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926660' 00:15:16.705 killing process with pid 926660 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 926660 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 926660 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=935812 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 935812' 00:15:16.705 Process pid: 935812 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 935812 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 935812 ']' 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.705 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:16.705 [2024-10-21 11:59:53.293576] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:16.705 [2024-10-21 11:59:53.294630] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:15:16.705 [2024-10-21 11:59:53.294677] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.967 [2024-10-21 11:59:53.374834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.967 [2024-10-21 11:59:53.409382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.967 [2024-10-21 11:59:53.409413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.967 [2024-10-21 11:59:53.409419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.967 [2024-10-21 11:59:53.409424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.967 [2024-10-21 11:59:53.409428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.967 [2024-10-21 11:59:53.410729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.967 [2024-10-21 11:59:53.410884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.967 [2024-10-21 11:59:53.411037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.967 [2024-10-21 11:59:53.411039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.967 [2024-10-21 11:59:53.463165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:16.967 [2024-10-21 11:59:53.463979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:16.967 [2024-10-21 11:59:53.464852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:16.967 [2024-10-21 11:59:53.465305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:16.967 [2024-10-21 11:59:53.465339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
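At this point the target has been relaunched with --interrupt-mode, and each spdk_thread is switched to interrupt-driven operation before the VFIOUSER transport is recreated with the extra -M -I transport arguments (traced below). A sketch of that relaunch, with flags copied from the trace (rpc.py path shortened; backgrounding with & is an assumption):
  # nvmf_tgt on cores 0-3, all tracepoint groups enabled, interrupt mode on
  "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # Once the RPC socket is up, recreate the transport with the extra args used by this pass
  rpc.py nvmf_create_transport -t VFIOUSER -M -I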
00:15:17.540 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.540 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:17.540 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.927 Malloc1 00:15:18.927 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.188 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.448 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.707 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.707 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.707 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:19.707 Malloc2 00:15:19.707 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:19.967 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.227 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:20.227 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 935812 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 935812 ']' 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 935812 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935812 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935812' 00:15:20.488 killing process with pid 935812 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 935812 00:15:20.488 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 935812 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.488 00:15:20.488 real 0m50.854s 00:15:20.488 user 3m14.522s 00:15:20.488 sys 0m2.999s 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.488 ************************************ 00:15:20.488 END TEST nvmf_vfio_user 00:15:20.488 ************************************ 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.488 11:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 ************************************ 00:15:20.750 START TEST nvmf_vfio_user_nvme_compliance 00:15:20.750 ************************************ 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:20.750 * Looking for test storage... 
00:15:20.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.750 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.751 --rc genhtml_branch_coverage=1 00:15:20.751 --rc genhtml_function_coverage=1 00:15:20.751 --rc genhtml_legend=1 00:15:20.751 --rc geninfo_all_blocks=1 00:15:20.751 --rc geninfo_unexecuted_blocks=1 00:15:20.751 00:15:20.751 ' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.751 --rc genhtml_branch_coverage=1 00:15:20.751 --rc genhtml_function_coverage=1 00:15:20.751 --rc genhtml_legend=1 00:15:20.751 --rc geninfo_all_blocks=1 00:15:20.751 --rc geninfo_unexecuted_blocks=1 00:15:20.751 00:15:20.751 ' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.751 --rc genhtml_branch_coverage=1 00:15:20.751 --rc genhtml_function_coverage=1 00:15:20.751 --rc genhtml_legend=1 00:15:20.751 --rc geninfo_all_blocks=1 00:15:20.751 --rc geninfo_unexecuted_blocks=1 00:15:20.751 00:15:20.751 ' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.751 --rc genhtml_branch_coverage=1 00:15:20.751 --rc genhtml_function_coverage=1 00:15:20.751 --rc genhtml_legend=1 00:15:20.751 --rc geninfo_all_blocks=1 00:15:20.751 --rc 
geninfo_unexecuted_blocks=1 00:15:20.751 00:15:20.751 ' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=936694 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 936694' 00:15:20.751 Process pid: 936694 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 936694 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 936694 ']' 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.751 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.752 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.752 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.752 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.012 [2024-10-21 11:59:57.383661] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:15:21.012 [2024-10-21 11:59:57.383728] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.012 [2024-10-21 11:59:57.467385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.012 [2024-10-21 11:59:57.501966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.012 [2024-10-21 11:59:57.501997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.012 [2024-10-21 11:59:57.502004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.012 [2024-10-21 11:59:57.502009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.012 [2024-10-21 11:59:57.502013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.012 [2024-10-21 11:59:57.503214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.012 [2024-10-21 11:59:57.503360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.012 [2024-10-21 11:59:57.503375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.583 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.583 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:21.583 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.969 malloc0 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:22.969 11:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.969 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:22.969 00:15:22.969 00:15:22.969 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.969 http://cunit.sourceforge.net/ 00:15:22.970 00:15:22.970 00:15:22.970 Suite: nvme_compliance 00:15:22.970 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-21 11:59:59.403761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.970 [2024-10-21 11:59:59.405048] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:22.970 [2024-10-21 11:59:59.405059] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:22.970 [2024-10-21 11:59:59.405064] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:22.970 [2024-10-21 11:59:59.406783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.970 passed 00:15:22.970 Test: admin_identify_ctrlr_verify_fused ...[2024-10-21 11:59:59.485288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.970 [2024-10-21 11:59:59.488304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.970 passed 00:15:23.231 Test: admin_identify_ns ...[2024-10-21 11:59:59.566705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.231 [2024-10-21 11:59:59.627327] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:23.231 [2024-10-21 11:59:59.635329] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:23.231 [2024-10-21 11:59:59.656403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:23.231 passed 00:15:23.231 Test: admin_get_features_mandatory_features ...[2024-10-21 11:59:59.728611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.231 [2024-10-21 11:59:59.731628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.231 passed 00:15:23.231 Test: admin_get_features_optional_features ...[2024-10-21 11:59:59.807079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.231 [2024-10-21 11:59:59.813122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.491 passed 00:15:23.491 Test: admin_set_features_number_of_queues ...[2024-10-21 11:59:59.886852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.491 [2024-10-21 11:59:59.992407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.491 passed 00:15:23.491 Test: admin_get_log_page_mandatory_logs ...[2024-10-21 12:00:00.065638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.491 [2024-10-21 12:00:00.068656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.752 passed 00:15:23.752 Test: admin_get_log_page_with_lpo ...[2024-10-21 12:00:00.143685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.752 [2024-10-21 12:00:00.215332] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:23.752 [2024-10-21 12:00:00.228369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.752 passed 00:15:23.752 Test: fabric_property_get ...[2024-10-21 12:00:00.299652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.752 [2024-10-21 12:00:00.300855] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:23.752 [2024-10-21 12:00:00.302668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.752 passed 00:15:24.012 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-21 12:00:00.380134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.012 [2024-10-21 12:00:00.381334] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:24.012 [2024-10-21 12:00:00.383153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.012 passed 00:15:24.012 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-21 12:00:00.458891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.012 [2024-10-21 12:00:00.542329] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:24.013 [2024-10-21 12:00:00.558325] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:24.013 [2024-10-21 12:00:00.563430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.013 passed 00:15:24.274 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-21 12:00:00.640466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.274 [2024-10-21 12:00:00.641666] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:24.274 [2024-10-21 12:00:00.643488] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.274 passed 00:15:24.274 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-21 12:00:00.717186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.274 [2024-10-21 12:00:00.793329] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:24.274 [2024-10-21 12:00:00.817329] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:24.274 [2024-10-21 12:00:00.822401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.274 passed 00:15:24.535 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-21 12:00:00.895592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.535 [2024-10-21 12:00:00.896797] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:24.535 [2024-10-21 12:00:00.896815] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:24.535 [2024-10-21 12:00:00.898613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.535 passed 00:15:24.535 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-21 12:00:00.975322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.535 [2024-10-21 12:00:01.069327] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:24.535 [2024-10-21 12:00:01.077329] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:24.535 [2024-10-21 12:00:01.085330] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:24.535 [2024-10-21 12:00:01.093330] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:24.535 [2024-10-21 12:00:01.122393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.796 passed 00:15:24.796 Test: admin_create_io_sq_verify_pc ...[2024-10-21 12:00:01.195597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.796 [2024-10-21 12:00:01.212332] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:24.796 [2024-10-21 12:00:01.229739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.796 passed 00:15:24.796 Test: admin_create_io_qp_max_qps ...[2024-10-21 12:00:01.305182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.181 [2024-10-21 12:00:02.400328] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:26.442 [2024-10-21 12:00:02.789374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.442 passed 00:15:26.442 Test: admin_create_io_sq_shared_cq ...[2024-10-21 12:00:02.863115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.442 [2024-10-21 12:00:02.994327] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:26.442 [2024-10-21 12:00:03.031373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.732 passed 00:15:26.732 00:15:26.732 Run Summary: Type Total Ran Passed Failed Inactive 00:15:26.732 suites 1 1 n/a 0 0 00:15:26.732 tests 18 18 18 0 0 00:15:26.732 asserts 360 
360 360 0 n/a 00:15:26.732 00:15:26.732 Elapsed time = 1.489 seconds 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 936694 ']' 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936694' 00:15:26.732 killing process with pid 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 936694 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:26.732 00:15:26.732 real 0m6.157s 00:15:26.732 user 0m17.481s 00:15:26.732 sys 0m0.529s 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.732 ************************************ 00:15:26.732 END TEST nvmf_vfio_user_nvme_compliance 00:15:26.732 ************************************ 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.732 12:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.994 ************************************ 00:15:26.994 START TEST nvmf_vfio_user_fuzz 00:15:26.994 ************************************ 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:26.994 * Looking for test storage... 
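For reference, the vfio-user target that the compliance suite above exercised is assembled with a handful of RPCs, all visible in the trace (compliance.sh@31 through @38): create the VFIOUSER transport, back it with a malloc bdev, create the subsystem, then attach the namespace and the listener. A minimal sketch calling scripts/rpc.py directly instead of the rpc_cmd wrapper; paths assume a default SPDK checkout:

# Build the same vfio-user compliance target by hand.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                        # vfio-user transport
$rpc bdev_malloc_create 64 512 -b malloc0                     # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                    # traddr is a directory for VFIOUSER

The vfio_user_fuzz test starting below repeats essentially the same sequence before pointing the fuzzer at the socket.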
00:15:26.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.994 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:26.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.994 --rc genhtml_branch_coverage=1 00:15:26.995 --rc genhtml_function_coverage=1 00:15:26.995 --rc genhtml_legend=1 00:15:26.995 --rc geninfo_all_blocks=1 00:15:26.995 --rc geninfo_unexecuted_blocks=1 00:15:26.995 00:15:26.995 ' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:26.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.995 --rc genhtml_branch_coverage=1 00:15:26.995 --rc genhtml_function_coverage=1 00:15:26.995 --rc genhtml_legend=1 00:15:26.995 --rc geninfo_all_blocks=1 00:15:26.995 --rc geninfo_unexecuted_blocks=1 00:15:26.995 00:15:26.995 ' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:26.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.995 --rc genhtml_branch_coverage=1 00:15:26.995 --rc genhtml_function_coverage=1 00:15:26.995 --rc genhtml_legend=1 00:15:26.995 --rc geninfo_all_blocks=1 00:15:26.995 --rc geninfo_unexecuted_blocks=1 00:15:26.995 00:15:26.995 ' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:26.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.995 --rc genhtml_branch_coverage=1 00:15:26.995 --rc genhtml_function_coverage=1 00:15:26.995 --rc genhtml_legend=1 00:15:26.995 --rc geninfo_all_blocks=1 00:15:26.995 --rc geninfo_unexecuted_blocks=1 00:15:26.995 00:15:26.995 ' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:26.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=938012 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 938012' 00:15:26.995 Process pid: 938012 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 938012 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 938012 ']' 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
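The launch traced above (vfio_user_fuzz.sh@23 through @28) follows the standard SPDK test pattern: start nvmf_tgt in the background, record its pid, arm a cleanup trap, then block until the JSON-RPC socket answers. Reduced to its core below; the polling loop is a stand-in for the suite's full waitforlisten helper, not a copy of it:

# Start the target: shm id 0 (-i), all tracepoint groups (-e 0xFFFF),
# core mask 0x1 (-m).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
# waitforlisten: poll until the RPC socket accepts a harmless query.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done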
00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.995 12:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.938 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.938 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:27.938 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.880 malloc0 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.880 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:15:29.141 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:01.255 Fuzzing completed. Shutting down the fuzz application 00:16:01.255 00:16:01.255 Dumping successful admin opcodes: 00:16:01.255 8, 9, 10, 24, 00:16:01.255 Dumping successful io opcodes: 00:16:01.255 0, 00:16:01.255 NS: 0x20000081ef00 I/O qp, Total commands completed: 1337624, total successful commands: 5243, random_seed: 4258379840 00:16:01.255 NS: 0x20000081ef00 admin qp, Total commands completed: 307761, total successful commands: 2467, random_seed: 2008734336 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 938012 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 938012 ']' 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 938012 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 938012 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 938012' 00:16:01.255 killing process with pid 938012 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 938012 00:16:01.255 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 938012 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:01.255 00:16:01.255 real 0m32.796s 00:16:01.255 user 0m35.008s 00:16:01.255 sys 0m26.804s 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.255 
************************************ 00:16:01.255 END TEST nvmf_vfio_user_fuzz 00:16:01.255 ************************************ 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.255 ************************************ 00:16:01.255 START TEST nvmf_auth_target 00:16:01.255 ************************************ 00:16:01.255 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:01.256 * Looking for test storage... 00:16:01.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:01.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.256 --rc genhtml_branch_coverage=1 00:16:01.256 --rc genhtml_function_coverage=1 00:16:01.256 --rc genhtml_legend=1 00:16:01.256 --rc geninfo_all_blocks=1 00:16:01.256 --rc geninfo_unexecuted_blocks=1 00:16:01.256 00:16:01.256 ' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:01.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.256 --rc genhtml_branch_coverage=1 00:16:01.256 --rc genhtml_function_coverage=1 00:16:01.256 --rc genhtml_legend=1 00:16:01.256 --rc geninfo_all_blocks=1 00:16:01.256 --rc geninfo_unexecuted_blocks=1 00:16:01.256 00:16:01.256 ' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:01.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.256 --rc genhtml_branch_coverage=1 00:16:01.256 --rc genhtml_function_coverage=1 00:16:01.256 --rc genhtml_legend=1 00:16:01.256 --rc geninfo_all_blocks=1 00:16:01.256 --rc geninfo_unexecuted_blocks=1 00:16:01.256 00:16:01.256 ' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:01.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.256 --rc genhtml_branch_coverage=1 00:16:01.256 --rc genhtml_function_coverage=1 00:16:01.256 --rc genhtml_legend=1 00:16:01.256 --rc geninfo_all_blocks=1 00:16:01.256 --rc geninfo_unexecuted_blocks=1 00:16:01.256 00:16:01.256 ' 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.256 12:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated /opt toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:01.256 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:01.257 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:07.845 
12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:07.845 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.845 12:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:07.845 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:07.845 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:07.845 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
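The discovery loop above is plain sysfs traversal: each supported PCI function (here both ports of an Intel E810, 0x8086:0x159b, bound to the ice driver) is resolved to its kernel net device through /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup, with the address taken from this run; the operstate read stands in for the up/up check at @416:

pci=0000:4b:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
  dev=${path##*/}                                   # strip the sysfs prefix, as @425 does
  state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
  echo "Found net device under $pci: $dev (operstate: $state)"
done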
net_devs+=("${pci_net_devs[@]}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:07.845 12:00:43 
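nvmf_tcp_init, traced above, builds a point-to-point topology out of the two E810 ports: the target port is moved into its own network namespace with 10.0.0.2/24 while the initiator port stays in the root namespace with 10.0.0.1/24, and an iptables rule opens TCP/4420. Condensed into a standalone sketch (interface names and addresses are the ones chosen above; the real run additionally tags the rule with an SPDK_NVMF comment so teardown can find it):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port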
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:07.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:16:07.845 00:16:07.845 --- 10.0.0.2 ping statistics --- 00:16:07.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.845 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:16:07.845 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:16:07.845 00:16:07.845 --- 10.0.0.1 ping statistics --- 00:16:07.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.846 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=948578 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 948578 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 948578 ']' 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
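After the cross-namespace pings confirm the link, nvmfappstart boils down to: launch nvmf_tgt inside the target namespace, remember its pid, and block until the RPC socket answers. A rough equivalent of the sequence above; the poll loop approximates what waitforlisten does:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                                           # wait for /var/tmp/spdk.sock to come up
done
echo "nvmf_tgt listening, pid $nvmfpid"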
00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.846 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=948694 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3124882c6d304a359ff42f8c297bef796870979263c69ea7 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Q90 00:16:08.106 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3124882c6d304a359ff42f8c297bef796870979263c69ea7 0 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3124882c6d304a359ff42f8c297bef796870979263c69ea7 0 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3124882c6d304a359ff42f8c297bef796870979263c69ea7 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Q90 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Q90 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Q90 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e30d9abdd3808550dcd69b489b0313572b0eadfbdfab823d02f432b6e9eba54a 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.crV 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e30d9abdd3808550dcd69b489b0313572b0eadfbdfab823d02f432b6e9eba54a 3 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e30d9abdd3808550dcd69b489b0313572b0eadfbdfab823d02f432b6e9eba54a 3 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e30d9abdd3808550dcd69b489b0313572b0eadfbdfab823d02f432b6e9eba54a 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:08.107 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.crV 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.crV 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.crV 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
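gen_dhchap_key, expanded above, is worth unpacking: it draws len/2 random bytes with xxd, keeps the resulting hex string as the secret, and wraps it in the DHHC-1 container used for NVMe DH-HMAC-CHAP secrets, i.e. base64 of the secret bytes followed by their CRC-32, with a digest id in the middle field (00 null, 01 sha256, 02 sha384, 03 sha512; compare the spdk.key-null vs spdk.key-sha512 files above). A minimal re-implementation of the "null 48" case; the little-endian CRC packing mirrors what the in-tree python one-liner produces, but treat this as an illustrative sketch:

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
python3 - "$key" <<'PY'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)  # CRC-32 of the secret, little-endian
print('DHHC-1:00:' + base64.b64encode(secret + crc).decode() + ':')
PY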
00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cef16b7b69e304b265bdf51dec329df3 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.L8B 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cef16b7b69e304b265bdf51dec329df3 1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cef16b7b69e304b265bdf51dec329df3 1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cef16b7b69e304b265bdf51dec329df3 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.L8B 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.L8B 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.L8B 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b108d5915cc9d7dcafc77090235d618f2be8e3df5d6bcf45 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.biw 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b108d5915cc9d7dcafc77090235d618f2be8e3df5d6bcf45 2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b108d5915cc9d7dcafc77090235d618f2be8e3df5d6bcf45 2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.368 12:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b108d5915cc9d7dcafc77090235d618f2be8e3df5d6bcf45 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.biw 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.biw 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.biw 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c5f74a6452b1e783d7ae65b34d5501737142a1fc5d71a6ca 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.E4x 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c5f74a6452b1e783d7ae65b34d5501737142a1fc5d71a6ca 2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c5f74a6452b1e783d7ae65b34d5501737142a1fc5d71a6ca 2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c5f74a6452b1e783d7ae65b34d5501737142a1fc5d71a6ca 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.E4x 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.E4x 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.E4x 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=67626098b7953d24273d14c376e9dd44 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.si6 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 67626098b7953d24273d14c376e9dd44 1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 67626098b7953d24273d14c376e9dd44 1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=67626098b7953d24273d14c376e9dd44 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:08.368 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.630 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.si6 00:16:08.630 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.si6 00:16:08.630 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.si6 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=60134821e6ac12e06ec51216ee5759193b9164c5c97b268aba201809c6a20c6e 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.IES 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 60134821e6ac12e06ec51216ee5759193b9164c5c97b268aba201809c6a20c6e 3 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 60134821e6ac12e06ec51216ee5759193b9164c5c97b268aba201809c6a20c6e 3 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=60134821e6ac12e06ec51216ee5759193b9164c5c97b268aba201809c6a20c6e 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.IES 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.IES 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.IES 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 948578 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 948578 ']' 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.630 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 948694 /var/tmp/host.sock 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 948694 ']' 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:08.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
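At this point all four key slots are populated; each slot pairs a host key with a controller key of a different digest, and slot 3 carries no controller key, so that round runs without a bidirectional challenge. A recap of the files generated above:

keys[0]=/tmp/spdk.key-null.Q90      ckeys[0]=/tmp/spdk.key-sha512.crV
keys[1]=/tmp/spdk.key-sha256.L8B    ckeys[1]=/tmp/spdk.key-sha384.biw
keys[2]=/tmp/spdk.key-sha384.E4x    ckeys[2]=/tmp/spdk.key-sha256.si6
keys[3]=/tmp/spdk.key-sha512.IES    ckeys[3]=                      # no ctrlr key for slot 3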
00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q90 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Q90 00:16:08.891 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Q90 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.crV ]] 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV 00:16:09.153 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L8B 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.413 12:00:45 
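Steps @108-@113 register every key file twice, once with the target over /var/tmp/spdk.sock and once with the host-side spdk_tgt over /var/tmp/host.sock, since both ends of DH-HMAC-CHAP need the material in their keyrings. Functionally, each iteration is just the following (shown for slot 0; the rpc_cmd and hostrpc wrappers above do the same through cached sockets):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" keyring_file_add_key key0 /tmp/spdk.key-null.Q90                         # target side
"$RPC" -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Q90  # host side
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV
"$RPC" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV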
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.L8B 00:16:09.413 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.L8B 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.biw ]] 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.biw 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.biw 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.biw 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E4x 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.673 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.674 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.674 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.E4x 00:16:09.674 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.E4x 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.si6 ]] 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.si6 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.si6 00:16:09.934 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.si6 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:10.195 12:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IES 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.IES 00:16:10.195 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.IES 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.456 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.456 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.456 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.456 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.456 
12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.717 00:16:10.717 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.717 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.717 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.978 { 00:16:10.978 "cntlid": 1, 00:16:10.978 "qid": 0, 00:16:10.978 "state": "enabled", 00:16:10.978 "thread": "nvmf_tgt_poll_group_000", 00:16:10.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:10.978 "listen_address": { 00:16:10.978 "trtype": "TCP", 00:16:10.978 "adrfam": "IPv4", 00:16:10.978 "traddr": "10.0.0.2", 00:16:10.978 "trsvcid": "4420" 00:16:10.978 }, 00:16:10.978 "peer_address": { 00:16:10.978 "trtype": "TCP", 00:16:10.978 "adrfam": "IPv4", 00:16:10.978 "traddr": "10.0.0.1", 00:16:10.978 "trsvcid": "48126" 00:16:10.978 }, 00:16:10.978 "auth": { 00:16:10.978 "state": "completed", 00:16:10.978 "digest": "sha256", 00:16:10.978 "dhgroup": "null" 00:16:10.978 } 00:16:10.978 } 00:16:10.978 ]' 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.978 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.239 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.239 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.239 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.239 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
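One full connect_authenticate round, as traced above for sha256/null/key0, reduces to four RPCs: restrict the host-side initiator to the digest and DH group under test, allow the host NQN on the subsystem with the key pair, attach a controller with the same pair, and check what the qpair actually negotiated. A condensed sketch with the identifiers from this run, reusing RPC from the keyring sketch above:

HOSTRPC=("$RPC" -s /var/tmp/host.sock)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
"${HOSTRPC[@]}" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"${HOSTRPC[@]}" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the negotiated auth parameters, as the jq checks above do
"$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "null"'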
DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:11.239 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.180 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.181 12:00:48 
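The second leg of each round, shown above, repeats the handshake from the kernel initiator: nvme-cli takes the secrets in their DHHC-1 text form, which is exactly what the generated key files contain, so they can be passed straight through. A sketch of the key0 round (connect, then tear down both the fabric session and the subsystem's host entry, reusing RPC and HOSTNQN from the sketches above):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$(cat /tmp/spdk.key-null.Q90)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.crV)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
"$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"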
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.181 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.442 00:16:12.442 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.442 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.442 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.442 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.442 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.442 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.442 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.702 { 00:16:12.702 "cntlid": 3, 00:16:12.702 "qid": 0, 00:16:12.702 "state": "enabled", 00:16:12.702 "thread": "nvmf_tgt_poll_group_000", 00:16:12.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:12.702 "listen_address": { 00:16:12.702 "trtype": "TCP", 00:16:12.702 "adrfam": "IPv4", 00:16:12.702 "traddr": "10.0.0.2", 00:16:12.702 "trsvcid": "4420" 00:16:12.702 }, 00:16:12.702 "peer_address": { 00:16:12.702 "trtype": "TCP", 00:16:12.702 "adrfam": "IPv4", 00:16:12.702 "traddr": "10.0.0.1", 00:16:12.702 "trsvcid": "48152" 00:16:12.702 }, 00:16:12.702 "auth": { 00:16:12.702 "state": "completed", 00:16:12.702 "digest": "sha256", 00:16:12.702 "dhgroup": "null" 00:16:12.702 } 00:16:12.702 } 00:16:12.702 ]' 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.702 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.963 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:12.963 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.533 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.794 12:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.794 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.054 00:16:14.054 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.054 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.054 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.314 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.314 { 00:16:14.314 "cntlid": 5, 00:16:14.315 "qid": 0, 00:16:14.315 "state": "enabled", 00:16:14.315 "thread": "nvmf_tgt_poll_group_000", 00:16:14.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.315 "listen_address": { 00:16:14.315 "trtype": "TCP", 00:16:14.315 "adrfam": "IPv4", 00:16:14.315 "traddr": "10.0.0.2", 00:16:14.315 "trsvcid": "4420" 00:16:14.315 }, 00:16:14.315 "peer_address": { 00:16:14.315 "trtype": "TCP", 00:16:14.315 "adrfam": "IPv4", 00:16:14.315 "traddr": "10.0.0.1", 00:16:14.315 "trsvcid": "42332" 00:16:14.315 }, 00:16:14.315 "auth": { 00:16:14.315 "state": "completed", 00:16:14.315 "digest": "sha256", 00:16:14.315 "dhgroup": "null" 00:16:14.315 } 00:16:14.315 } 00:16:14.315 ]' 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.315 12:00:50 
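
The recurring xtrace_disable / "set +x" / "[[ 0 == 0 ]]" triples are plumbing, not test logic: rpc_cmd silences tracing around each RPC and then asserts the exit status, which the trace renders as "[[ 0 == 0 ]]" on success. A minimal reconstruction of that pattern (the real helper in common/autotest_common.sh is more elaborate):

    rpc_cmd() {
        xtrace_disable                  # shows up as autotest_common.sh@561
        "$rootdir/scripts/rpc.py" "$@"  # tracing is off while the RPC runs
        local status=$?
        xtrace_restore                  # set -x comes back on here
        [[ $status == 0 ]]              # traced as "[[ 0 == 0 ]]" at @589
    }
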
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.315 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.575 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:14.575 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.145 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.406 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.666 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.666 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.928 { 00:16:15.928 "cntlid": 7, 00:16:15.928 "qid": 0, 00:16:15.928 "state": "enabled", 00:16:15.928 "thread": "nvmf_tgt_poll_group_000", 00:16:15.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:15.928 "listen_address": { 00:16:15.928 "trtype": "TCP", 00:16:15.928 "adrfam": "IPv4", 00:16:15.928 "traddr": "10.0.0.2", 00:16:15.928 "trsvcid": "4420" 00:16:15.928 }, 00:16:15.928 "peer_address": { 00:16:15.928 "trtype": "TCP", 00:16:15.928 "adrfam": "IPv4", 00:16:15.928 "traddr": "10.0.0.1", 00:16:15.928 "trsvcid": "42364" 00:16:15.928 }, 00:16:15.928 "auth": { 00:16:15.928 "state": "completed", 00:16:15.928 "digest": "sha256", 00:16:15.928 "dhgroup": "null" 00:16:15.928 } 00:16:15.928 } 00:16:15.928 ]' 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.928 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.191 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:16.191 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.760 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.020 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.281 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.281 { 00:16:17.281 "cntlid": 9, 00:16:17.281 "qid": 0, 00:16:17.281 "state": "enabled", 00:16:17.281 "thread": "nvmf_tgt_poll_group_000", 00:16:17.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.281 "listen_address": { 00:16:17.281 "trtype": "TCP", 00:16:17.281 "adrfam": "IPv4", 00:16:17.281 "traddr": "10.0.0.2", 00:16:17.281 "trsvcid": "4420" 00:16:17.281 }, 00:16:17.281 "peer_address": { 00:16:17.281 "trtype": "TCP", 00:16:17.281 "adrfam": "IPv4", 00:16:17.281 "traddr": "10.0.0.1", 00:16:17.281 "trsvcid": "42386" 00:16:17.281 }, 00:16:17.281 "auth": { 00:16:17.281 "state": "completed", 00:16:17.281 "digest": "sha256", 00:16:17.281 "dhgroup": "ffdhe2048" 00:16:17.281 } 00:16:17.281 } 00:16:17.281 ]' 00:16:17.281 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.542 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.802 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:17.802 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:18.372 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.372 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.372 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.373 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.373 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.373 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.373 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.373 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.633 12:00:54 
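
The opaque --dhchap-secret strings above follow the NVMe-oF secret representation DHHC-1:NN:<base64 key material>:, where the NN field names the hash used to transform the configured secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); nvme-cli can generate such secrets with nvme gen-dhchap-key. Stripped of the host NQN/UUID bookkeeping, the host-side probe performed each round boils down to the following (the <...> secrets are placeholders, not the traced keys):

    # Connect through the kernel initiator with bidirectional auth material,
    # then tear the fabrics session down again.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        --dhchap-secret      "DHHC-1:00:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
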
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.633 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.633 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.633 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.633 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.633 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.633 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.894 { 00:16:18.894 "cntlid": 11, 00:16:18.894 "qid": 0, 00:16:18.894 "state": "enabled", 00:16:18.894 "thread": "nvmf_tgt_poll_group_000", 00:16:18.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:18.894 "listen_address": { 00:16:18.894 "trtype": "TCP", 00:16:18.894 "adrfam": "IPv4", 00:16:18.894 "traddr": "10.0.0.2", 00:16:18.894 "trsvcid": "4420" 00:16:18.894 }, 00:16:18.894 "peer_address": { 00:16:18.894 "trtype": "TCP", 00:16:18.894 "adrfam": "IPv4", 00:16:18.894 "traddr": "10.0.0.1", 00:16:18.894 "trsvcid": "42422" 00:16:18.894 }, 00:16:18.894 "auth": { 00:16:18.894 "state": "completed", 00:16:18.894 "digest": "sha256", 00:16:18.894 "dhgroup": "ffdhe2048" 00:16:18.894 } 00:16:18.894 } 00:16:18.894 ]' 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.894 12:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.894 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.154 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:19.155 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.095 12:00:56 
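
A side note on readability: comparisons such as [[ sha256 == \s\h\a\2\5\6 ]] are not corrupted output. Inside [[ ]] the right-hand side of == is a glob pattern, and because the script quotes it, bash's xtrace prints every character backslash-escaped to make the literal match explicit:

    set -x
    digest=sha256
    [[ $digest == "sha256" ]]
    # xtrace renders the comparison above as:
    #   + [[ sha256 == \s\h\a\2\5\6 ]]
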
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.095 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.096 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.096 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.096 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.096 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.357 00:16:20.357 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.357 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.357 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.617 { 00:16:20.617 "cntlid": 13, 00:16:20.617 "qid": 0, 00:16:20.617 "state": "enabled", 00:16:20.617 "thread": "nvmf_tgt_poll_group_000", 00:16:20.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:20.617 "listen_address": { 00:16:20.617 "trtype": "TCP", 00:16:20.617 "adrfam": "IPv4", 00:16:20.617 "traddr": "10.0.0.2", 00:16:20.617 "trsvcid": "4420" 00:16:20.617 }, 00:16:20.617 "peer_address": { 00:16:20.617 "trtype": "TCP", 00:16:20.617 "adrfam": "IPv4", 00:16:20.617 "traddr": "10.0.0.1", 00:16:20.617 "trsvcid": "42450" 00:16:20.617 }, 00:16:20.617 "auth": { 00:16:20.617 "state": "completed", 00:16:20.617 "digest": 
"sha256", 00:16:20.617 "dhgroup": "ffdhe2048" 00:16:20.617 } 00:16:20.617 } 00:16:20.617 ]' 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.617 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.878 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:20.878 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:21.449 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.449 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.449 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.449 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.449 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.449 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.449 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.449 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.709 12:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.709 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.970 00:16:21.970 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.970 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.970 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.231 { 00:16:22.231 "cntlid": 15, 00:16:22.231 "qid": 0, 00:16:22.231 "state": "enabled", 00:16:22.231 "thread": "nvmf_tgt_poll_group_000", 00:16:22.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.231 "listen_address": { 00:16:22.231 "trtype": "TCP", 00:16:22.231 "adrfam": "IPv4", 00:16:22.231 "traddr": "10.0.0.2", 00:16:22.231 "trsvcid": "4420" 00:16:22.231 }, 00:16:22.231 "peer_address": { 00:16:22.231 "trtype": "TCP", 00:16:22.231 "adrfam": "IPv4", 00:16:22.231 "traddr": "10.0.0.1", 00:16:22.231 
"trsvcid": "42482" 00:16:22.231 }, 00:16:22.231 "auth": { 00:16:22.231 "state": "completed", 00:16:22.231 "digest": "sha256", 00:16:22.231 "dhgroup": "ffdhe2048" 00:16:22.231 } 00:16:22.231 } 00:16:22.231 ]' 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.231 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.506 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:22.506 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.194 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:23.481 12:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.481 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.481 00:16:23.481 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.481 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.481 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.742 { 00:16:23.742 "cntlid": 17, 00:16:23.742 "qid": 0, 00:16:23.742 "state": "enabled", 00:16:23.742 "thread": "nvmf_tgt_poll_group_000", 00:16:23.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:23.742 "listen_address": { 00:16:23.742 "trtype": "TCP", 00:16:23.742 "adrfam": "IPv4", 
00:16:23.742 "traddr": "10.0.0.2", 00:16:23.742 "trsvcid": "4420" 00:16:23.742 }, 00:16:23.742 "peer_address": { 00:16:23.742 "trtype": "TCP", 00:16:23.742 "adrfam": "IPv4", 00:16:23.742 "traddr": "10.0.0.1", 00:16:23.742 "trsvcid": "42510" 00:16:23.742 }, 00:16:23.742 "auth": { 00:16:23.742 "state": "completed", 00:16:23.742 "digest": "sha256", 00:16:23.742 "dhgroup": "ffdhe3072" 00:16:23.742 } 00:16:23.742 } 00:16:23.742 ]' 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.742 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.002 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.002 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.002 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.002 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.002 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.262 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:24.262 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.834 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.094 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:25.094 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.094 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.094 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.095 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.095 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.355 { 
00:16:25.355 "cntlid": 19, 00:16:25.355 "qid": 0, 00:16:25.355 "state": "enabled", 00:16:25.355 "thread": "nvmf_tgt_poll_group_000", 00:16:25.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.355 "listen_address": { 00:16:25.355 "trtype": "TCP", 00:16:25.355 "adrfam": "IPv4", 00:16:25.355 "traddr": "10.0.0.2", 00:16:25.355 "trsvcid": "4420" 00:16:25.355 }, 00:16:25.355 "peer_address": { 00:16:25.355 "trtype": "TCP", 00:16:25.355 "adrfam": "IPv4", 00:16:25.355 "traddr": "10.0.0.1", 00:16:25.355 "trsvcid": "58372" 00:16:25.355 }, 00:16:25.355 "auth": { 00:16:25.355 "state": "completed", 00:16:25.355 "digest": "sha256", 00:16:25.355 "dhgroup": "ffdhe3072" 00:16:25.355 } 00:16:25.355 } 00:16:25.355 ]' 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.355 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.616 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.616 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.616 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.616 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.616 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.876 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:25.876 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.448 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.708 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.709 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.709 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.709 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.709 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.709 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.969 12:01:03 
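
One detail across the qpair dumps so far: cntlid has climbed from 3 to 19 in steps of two. That is consistent with each round allocating two dynamic controllers on cnode0, one for the bdev attach and one for the nvme-cli connect, so the snapshot taken after the attach always lands on the odd IDs. A quick way to watch it, assuming the same socket layout:

    # Print the controller ID of the first qpair after each attach.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].cntlid'
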
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.969 { 00:16:26.969 "cntlid": 21, 00:16:26.969 "qid": 0, 00:16:26.969 "state": "enabled", 00:16:26.969 "thread": "nvmf_tgt_poll_group_000", 00:16:26.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:26.969 "listen_address": { 00:16:26.969 "trtype": "TCP", 00:16:26.969 "adrfam": "IPv4", 00:16:26.969 "traddr": "10.0.0.2", 00:16:26.969 "trsvcid": "4420" 00:16:26.969 }, 00:16:26.969 "peer_address": { 00:16:26.969 "trtype": "TCP", 00:16:26.969 "adrfam": "IPv4", 00:16:26.969 "traddr": "10.0.0.1", 00:16:26.969 "trsvcid": "58396" 00:16:26.969 }, 00:16:26.969 "auth": { 00:16:26.969 "state": "completed", 00:16:26.969 "digest": "sha256", 00:16:26.969 "dhgroup": "ffdhe3072" 00:16:26.969 } 00:16:26.969 } 00:16:26.969 ]' 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.969 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.229 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:27.230 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.172 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.432 00:16:28.432 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.432 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.432 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.692 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.693 12:01:05 
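Note that the key3 cycle just traced passes only --dhchap-key key3 to nvmf_subsystem_add_host and bdev_connect: the ckey assignment visible at target/auth.sh@68 expands to an empty array when no controller key is configured for that index, so the flag simply vanishes from the command line. A standalone illustration of the idiom (the array contents here are assumptions; the real ckeys array is populated earlier in auth.sh):

    ckeys=("c0" "c1" "c2" "")   # assumed shape: no ctrlr secret for index 3
    for keyid in 0 1 2 3; do
        # ${...:+...} yields nothing when ckeys[keyid] is unset or empty.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # prints --dhchap-ctrlr-key ckey0..ckey2 for the first three, nothing for 3
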
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.693 { 00:16:28.693 "cntlid": 23, 00:16:28.693 "qid": 0, 00:16:28.693 "state": "enabled", 00:16:28.693 "thread": "nvmf_tgt_poll_group_000", 00:16:28.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.693 "listen_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.2", 00:16:28.693 "trsvcid": "4420" 00:16:28.693 }, 00:16:28.693 "peer_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.1", 00:16:28.693 "trsvcid": "58430" 00:16:28.693 }, 00:16:28.693 "auth": { 00:16:28.693 "state": "completed", 00:16:28.693 "digest": "sha256", 00:16:28.693 "dhgroup": "ffdhe3072" 00:16:28.693 } 00:16:28.693 } 00:16:28.693 ]' 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.693 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.953 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:28.953 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.524 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.784 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.045 00:16:30.045 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.045 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.045 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
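At this point the trace has advanced from ffdhe3072 to ffdhe4096: the for-loops visible at target/auth.sh@119-120 sweep every dhgroup/key combination for the current digest. Roughly (the array contents are assumptions; this window of the log only shows sha256 with ffdhe3072, ffdhe4096 and ffdhe6144, and the digest is presumably looped over further out):

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # assumed subset for this window
    keys=(key0 key1 key2 key3)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
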
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.305 { 00:16:30.305 "cntlid": 25, 00:16:30.305 "qid": 0, 00:16:30.305 "state": "enabled", 00:16:30.305 "thread": "nvmf_tgt_poll_group_000", 00:16:30.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.305 "listen_address": { 00:16:30.305 "trtype": "TCP", 00:16:30.305 "adrfam": "IPv4", 00:16:30.305 "traddr": "10.0.0.2", 00:16:30.305 "trsvcid": "4420" 00:16:30.305 }, 00:16:30.305 "peer_address": { 00:16:30.305 "trtype": "TCP", 00:16:30.305 "adrfam": "IPv4", 00:16:30.305 "traddr": "10.0.0.1", 00:16:30.305 "trsvcid": "58448" 00:16:30.305 }, 00:16:30.305 "auth": { 00:16:30.305 "state": "completed", 00:16:30.305 "digest": "sha256", 00:16:30.305 "dhgroup": "ffdhe4096" 00:16:30.305 } 00:16:30.305 } 00:16:30.305 ]' 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.305 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.565 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:30.565 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.136 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.397 12:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.658 00:16:31.658 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.658 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.658 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.918 { 00:16:31.918 "cntlid": 27, 00:16:31.918 "qid": 0, 00:16:31.918 "state": "enabled", 00:16:31.918 "thread": "nvmf_tgt_poll_group_000", 00:16:31.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.918 "listen_address": { 00:16:31.918 "trtype": "TCP", 00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "10.0.0.2", 00:16:31.918 "trsvcid": "4420" 00:16:31.918 }, 00:16:31.918 "peer_address": { 00:16:31.918 "trtype": "TCP", 00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "10.0.0.1", 00:16:31.918 "trsvcid": "58464" 00:16:31.918 }, 00:16:31.918 "auth": { 00:16:31.918 "state": "completed", 00:16:31.918 "digest": "sha256", 00:16:31.918 "dhgroup": "ffdhe4096" 00:16:31.918 } 00:16:31.918 } 00:16:31.918 ]' 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.918 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.178 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:32.178 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:33.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.121 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.381 00:16:33.382 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
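Each attach is followed by the same verification block: the host must see controller nvme0, and the target's view of the queue pair must report the expected digest and dhgroup with a completed auth state. Condensed from the jq checks in this trace (values shown for the ffdhe4096/key2 cycle; the exact plumbing of the jq calls differs slightly in the real script):

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
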
00:16:33.382 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.382 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.642 { 00:16:33.642 "cntlid": 29, 00:16:33.642 "qid": 0, 00:16:33.642 "state": "enabled", 00:16:33.642 "thread": "nvmf_tgt_poll_group_000", 00:16:33.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.642 "listen_address": { 00:16:33.642 "trtype": "TCP", 00:16:33.642 "adrfam": "IPv4", 00:16:33.642 "traddr": "10.0.0.2", 00:16:33.642 "trsvcid": "4420" 00:16:33.642 }, 00:16:33.642 "peer_address": { 00:16:33.642 "trtype": "TCP", 00:16:33.642 "adrfam": "IPv4", 00:16:33.642 "traddr": "10.0.0.1", 00:16:33.642 "trsvcid": "58484" 00:16:33.642 }, 00:16:33.642 "auth": { 00:16:33.642 "state": "completed", 00:16:33.642 "digest": "sha256", 00:16:33.642 "dhgroup": "ffdhe4096" 00:16:33.642 } 00:16:33.642 } 00:16:33.642 ]' 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.642 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.903 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:33.903 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: 
--dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.475 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.737 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.999 00:16:34.999 12:01:11 
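The plaintext secrets handed to nvme-cli in these cycles use the NVMe-oF DH-HMAC-CHAP key format DHHC-1:tt:<base64 key>:, where tt appears to encode the key transform (00 = unhashed, 01/02/03 = SHA-256/384/512), which matches how key0 through key3 consistently show up with DHHC-1:00: through DHHC-1:03: prefixes in this log. The kernel-initiator leg of a cycle then looks like this (secrets abbreviated; the full values appear in the trace, hostnqn as in the sketch further up):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
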
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.999 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.999 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.260 { 00:16:35.260 "cntlid": 31, 00:16:35.260 "qid": 0, 00:16:35.260 "state": "enabled", 00:16:35.260 "thread": "nvmf_tgt_poll_group_000", 00:16:35.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.260 "listen_address": { 00:16:35.260 "trtype": "TCP", 00:16:35.260 "adrfam": "IPv4", 00:16:35.260 "traddr": "10.0.0.2", 00:16:35.260 "trsvcid": "4420" 00:16:35.260 }, 00:16:35.260 "peer_address": { 00:16:35.260 "trtype": "TCP", 00:16:35.260 "adrfam": "IPv4", 00:16:35.260 "traddr": "10.0.0.1", 00:16:35.260 "trsvcid": "44120" 00:16:35.260 }, 00:16:35.260 "auth": { 00:16:35.260 "state": "completed", 00:16:35.260 "digest": "sha256", 00:16:35.260 "dhgroup": "ffdhe4096" 00:16:35.260 } 00:16:35.260 } 00:16:35.260 ]' 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.260 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.521 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:35.521 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.093 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.355 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.616 00:16:36.616 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.616 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.616 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.878 { 00:16:36.878 "cntlid": 33, 00:16:36.878 "qid": 0, 00:16:36.878 "state": "enabled", 00:16:36.878 "thread": "nvmf_tgt_poll_group_000", 00:16:36.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.878 "listen_address": { 00:16:36.878 "trtype": "TCP", 00:16:36.878 "adrfam": "IPv4", 00:16:36.878 "traddr": "10.0.0.2", 00:16:36.878 "trsvcid": "4420" 00:16:36.878 }, 00:16:36.878 "peer_address": { 00:16:36.878 "trtype": "TCP", 00:16:36.878 "adrfam": "IPv4", 00:16:36.878 "traddr": "10.0.0.1", 00:16:36.878 "trsvcid": "44144" 00:16:36.878 }, 00:16:36.878 "auth": { 00:16:36.878 "state": "completed", 00:16:36.878 "digest": "sha256", 00:16:36.878 "dhgroup": "ffdhe6144" 00:16:36.878 } 00:16:36.878 } 00:16:36.878 ]' 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.878 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.140 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.140 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.140 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.140 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:37.140 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.084 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.346 00:16:38.346 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.346 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.346 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.607 { 00:16:38.607 "cntlid": 35, 00:16:38.607 "qid": 0, 00:16:38.607 "state": "enabled", 00:16:38.607 "thread": "nvmf_tgt_poll_group_000", 00:16:38.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.607 "listen_address": { 00:16:38.607 "trtype": "TCP", 00:16:38.607 "adrfam": "IPv4", 00:16:38.607 "traddr": "10.0.0.2", 00:16:38.607 "trsvcid": "4420" 00:16:38.607 }, 00:16:38.607 "peer_address": { 00:16:38.607 "trtype": "TCP", 00:16:38.607 "adrfam": "IPv4", 00:16:38.607 "traddr": "10.0.0.1", 00:16:38.607 "trsvcid": "44156" 00:16:38.607 }, 00:16:38.607 "auth": { 00:16:38.607 "state": "completed", 00:16:38.607 "digest": "sha256", 00:16:38.607 "dhgroup": "ffdhe6144" 00:16:38.607 } 00:16:38.607 } 00:16:38.607 ]' 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.607 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.608 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.608 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.868 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.868 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.868 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.868 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:38.868 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
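Each cycle ends the same way before the next combination starts: drop the SPDK-host controller, re-prove the same keys from the kernel initiator, then revoke the host on the subsystem so the next digest/dhgroup/key pairing starts clean. Sketched with hypothetical secret/ctrl_secret variables standing in for the DHHC-1 strings:

    hostrpc bdev_nvme_detach_controller nvme0
    # nvme_connect is the auth.sh helper traced at @80/@36 above; the ctrl
    # secret is optional, mirroring the ckey idiom on the target side.
    nvme_connect --dhchap-secret "$secret" \
        ${ctrl_secret:+--dhchap-ctrl-secret "$ctrl_secret"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
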
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.810 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.071 00:16:40.071 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.071 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.071 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.331 { 00:16:40.331 "cntlid": 37, 00:16:40.331 "qid": 0, 00:16:40.331 "state": "enabled", 00:16:40.331 "thread": "nvmf_tgt_poll_group_000", 00:16:40.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.331 "listen_address": { 00:16:40.331 "trtype": "TCP", 00:16:40.331 "adrfam": "IPv4", 00:16:40.331 "traddr": "10.0.0.2", 00:16:40.331 "trsvcid": "4420" 00:16:40.331 }, 00:16:40.331 "peer_address": { 00:16:40.331 "trtype": "TCP", 00:16:40.331 "adrfam": "IPv4", 00:16:40.331 "traddr": "10.0.0.1", 00:16:40.331 "trsvcid": "44188" 00:16:40.331 }, 00:16:40.331 "auth": { 00:16:40.331 "state": "completed", 00:16:40.331 "digest": "sha256", 00:16:40.331 "dhgroup": "ffdhe6144" 00:16:40.331 } 00:16:40.331 } 00:16:40.331 ]' 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:40.331 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.592 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:40.592 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:41.164 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.424 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.425 12:01:17 
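The add_host call for key3 above carries no --dhchap-ctrlr-key, and that is deliberate: in connect_authenticate the ckeys entry for index 3 is empty, so the ${ckeys[$3]:+...} expansion at target/auth.sh@68 drops the extra flag entirely. A minimal sketch of that mechanism, with placeholder key names rather than the secrets from this run:

# $3 in connect_authenticate is the key index; the :+ expansion emits
# --dhchap-ctrlr-key only when a controller (bidirectional) secret
# exists for that index.
ckeys=("ckey0" "ckey1" "ckey2" "")        # index 3 deliberately empty
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
# ckey=() here, so nvmf_subsystem_add_host receives only --dhchap-key key3,
# i.e. the host authenticates to the target but not the other way around.
echo "extra args: ${ckey[*]:-(none)}"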
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.425 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.425 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.425 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.685 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.946 { 00:16:41.946 "cntlid": 39, 00:16:41.946 "qid": 0, 00:16:41.946 "state": "enabled", 00:16:41.946 "thread": "nvmf_tgt_poll_group_000", 00:16:41.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.946 "listen_address": { 00:16:41.946 "trtype": "TCP", 00:16:41.946 "adrfam": "IPv4", 00:16:41.946 "traddr": "10.0.0.2", 00:16:41.946 "trsvcid": "4420" 00:16:41.946 }, 00:16:41.946 "peer_address": { 00:16:41.946 "trtype": "TCP", 00:16:41.946 "adrfam": "IPv4", 00:16:41.946 "traddr": "10.0.0.1", 00:16:41.946 "trsvcid": "44216" 00:16:41.946 }, 00:16:41.946 "auth": { 00:16:41.946 "state": "completed", 00:16:41.946 "digest": "sha256", 00:16:41.946 "dhgroup": "ffdhe6144" 00:16:41.946 } 00:16:41.946 } 00:16:41.946 ]' 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.946 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.206 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:42.207 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
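Each iteration of this trace follows the same shape: pin the host to one digest/dhgroup pair, register the key on the subsystem, then attach a controller so the fabric connect is forced through DH-HMAC-CHAP. Condensed into plain rpc.py calls (socket path and key names are illustrative; the host NQN is abbreviated):

rpc=scripts/rpc.py; host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:...        # elided

# 1. host side: allow exactly one digest and one DH group
$rpc -s $host_sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# 2. target side (default RPC socket): register the host with key0;
#    the ckey0 argument turns on bidirectional authentication
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. host side: attach, which performs the authenticated connect
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 \
    -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0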
00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.146 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.718 00:16:43.718 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.718 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.718 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.980 { 00:16:43.980 "cntlid": 41, 00:16:43.980 "qid": 0, 00:16:43.980 "state": "enabled", 00:16:43.980 "thread": "nvmf_tgt_poll_group_000", 00:16:43.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.980 "listen_address": { 00:16:43.980 "trtype": "TCP", 00:16:43.980 "adrfam": "IPv4", 00:16:43.980 "traddr": "10.0.0.2", 00:16:43.980 "trsvcid": "4420" 00:16:43.980 }, 00:16:43.980 "peer_address": { 00:16:43.980 "trtype": "TCP", 00:16:43.980 "adrfam": "IPv4", 00:16:43.980 "traddr": "10.0.0.1", 00:16:43.980 "trsvcid": "44240" 00:16:43.980 }, 00:16:43.980 "auth": { 00:16:43.980 "state": "completed", 00:16:43.980 "digest": "sha256", 00:16:43.980 "dhgroup": "ffdhe8192" 00:16:43.980 } 00:16:43.980 } 00:16:43.980 ]' 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.980 12:01:20 
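Success is then asserted rather than assumed: the script dumps the subsystem's qpairs and compares the negotiated auth fields against what it just configured. (The backslash-riddled comparisons in the trace, e.g. \f\f\d\h\e\8\1\9\2, are only how set -x prints the right-hand side of a [[ == ]] pattern match.) Equivalent checks, assuming the same subsystem NQN:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]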
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.980 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.241 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:44.241 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.813 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.074 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.645 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.645 { 00:16:45.645 "cntlid": 43, 00:16:45.645 "qid": 0, 00:16:45.645 "state": "enabled", 00:16:45.645 "thread": "nvmf_tgt_poll_group_000", 00:16:45.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.645 "listen_address": { 00:16:45.645 "trtype": "TCP", 00:16:45.645 "adrfam": "IPv4", 00:16:45.645 "traddr": "10.0.0.2", 00:16:45.645 "trsvcid": "4420" 00:16:45.645 }, 00:16:45.645 "peer_address": { 00:16:45.645 "trtype": "TCP", 00:16:45.645 "adrfam": "IPv4", 00:16:45.645 "traddr": "10.0.0.1", 00:16:45.645 "trsvcid": "40152" 00:16:45.645 }, 00:16:45.645 "auth": { 00:16:45.645 "state": "completed", 00:16:45.645 "digest": "sha256", 00:16:45.645 "dhgroup": "ffdhe8192" 00:16:45.645 } 00:16:45.645 } 00:16:45.645 ]' 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:45.645 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.906 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.906 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.906 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.906 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.906 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.166 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:46.166 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.738 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.998 12:01:23 
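Every host-side RPC in this trace goes through the hostrpc helper expanded at target/auth.sh@31. The test runs two SPDK processes, and the initiator instance listens on its own UNIX socket so its RPCs do not land on the nvmf target, which presumably keeps the default socket. A reconstruction of the helper exactly as the trace shows it expanding:

hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
# e.g.: hostrpc bdev_nvme_get_controllers
#       hostrpc bdev_nvme_detach_controller nvme0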
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.998 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.259 00:16:47.259 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.259 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.259 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.521 { 00:16:47.521 "cntlid": 45, 00:16:47.521 "qid": 0, 00:16:47.521 "state": "enabled", 00:16:47.521 "thread": "nvmf_tgt_poll_group_000", 00:16:47.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.521 "listen_address": { 00:16:47.521 "trtype": "TCP", 00:16:47.521 "adrfam": "IPv4", 00:16:47.521 "traddr": "10.0.0.2", 00:16:47.521 "trsvcid": "4420" 00:16:47.521 }, 00:16:47.521 "peer_address": { 00:16:47.521 "trtype": "TCP", 00:16:47.521 "adrfam": "IPv4", 00:16:47.521 "traddr": "10.0.0.1", 00:16:47.521 "trsvcid": "40172" 00:16:47.521 }, 00:16:47.521 "auth": { 00:16:47.521 "state": "completed", 00:16:47.521 "digest": "sha256", 00:16:47.521 "dhgroup": "ffdhe8192" 00:16:47.521 } 00:16:47.521 } 00:16:47.521 ]' 00:16:47.521 
12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.521 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:47.782 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:48.723 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.724 12:01:25 
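The opaque strings passed as --dhchap-secret above are NVMe DH-HMAC-CHAP secrets in the standard "DHHC-1:<t>:<base64>:" representation: <t> names the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key bytes plus a CRC-32, which is why the DHHC-1:03: secret in this log is visibly longer than the DHHC-1:00: one. nvme-cli can mint such secrets; a hypothetical invocation (flag spelling may differ between nvme-cli versions):

# emit a fresh 32-byte secret marked for the SHA-256 transform
nvme gen-dhchap-key --hmac=1 --key-length=32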
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.724 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.294 00:16:49.294 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.294 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.294 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.555 { 00:16:49.555 "cntlid": 47, 00:16:49.555 "qid": 0, 00:16:49.555 "state": "enabled", 00:16:49.555 "thread": "nvmf_tgt_poll_group_000", 00:16:49.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.555 "listen_address": { 00:16:49.555 "trtype": "TCP", 00:16:49.555 "adrfam": "IPv4", 00:16:49.555 "traddr": "10.0.0.2", 00:16:49.555 "trsvcid": "4420" 00:16:49.555 }, 00:16:49.555 "peer_address": { 00:16:49.555 "trtype": "TCP", 00:16:49.555 "adrfam": "IPv4", 00:16:49.555 "traddr": "10.0.0.1", 00:16:49.555 "trsvcid": "40188" 00:16:49.555 }, 00:16:49.555 "auth": { 00:16:49.555 "state": "completed", 00:16:49.555 
"digest": "sha256", 00:16:49.555 "dhgroup": "ffdhe8192" 00:16:49.555 } 00:16:49.555 } 00:16:49.555 ]' 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.555 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.555 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.555 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.555 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.815 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:49.815 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.386 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:50.646 12:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.646 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.646 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.906 { 00:16:50.906 "cntlid": 49, 00:16:50.906 "qid": 0, 00:16:50.906 "state": "enabled", 00:16:50.906 "thread": "nvmf_tgt_poll_group_000", 00:16:50.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.906 "listen_address": { 00:16:50.906 "trtype": "TCP", 00:16:50.906 "adrfam": "IPv4", 
00:16:50.906 "traddr": "10.0.0.2", 00:16:50.906 "trsvcid": "4420" 00:16:50.906 }, 00:16:50.906 "peer_address": { 00:16:50.906 "trtype": "TCP", 00:16:50.906 "adrfam": "IPv4", 00:16:50.906 "traddr": "10.0.0.1", 00:16:50.906 "trsvcid": "40212" 00:16:50.906 }, 00:16:50.906 "auth": { 00:16:50.906 "state": "completed", 00:16:50.906 "digest": "sha384", 00:16:50.906 "dhgroup": "null" 00:16:50.906 } 00:16:50.906 } 00:16:50.906 ]' 00:16:50.906 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.167 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.427 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:51.427 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.006 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.007 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.268 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.268 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.527 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.527 { 00:16:52.527 "cntlid": 51, 00:16:52.527 "qid": 0, 00:16:52.527 "state": "enabled", 
00:16:52.527 "thread": "nvmf_tgt_poll_group_000", 00:16:52.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.527 "listen_address": { 00:16:52.528 "trtype": "TCP", 00:16:52.528 "adrfam": "IPv4", 00:16:52.528 "traddr": "10.0.0.2", 00:16:52.528 "trsvcid": "4420" 00:16:52.528 }, 00:16:52.528 "peer_address": { 00:16:52.528 "trtype": "TCP", 00:16:52.528 "adrfam": "IPv4", 00:16:52.528 "traddr": "10.0.0.1", 00:16:52.528 "trsvcid": "40244" 00:16:52.528 }, 00:16:52.528 "auth": { 00:16:52.528 "state": "completed", 00:16:52.528 "digest": "sha384", 00:16:52.528 "dhgroup": "null" 00:16:52.528 } 00:16:52.528 } 00:16:52.528 ]' 00:16:52.528 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.528 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.528 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.528 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.528 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.787 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.787 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.787 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.787 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:52.787 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:53.358 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.358 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:53.618 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.618 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:53.618 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.618 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.618 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.618 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.619 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.878 00:16:53.878 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.878 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.878 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.139 12:01:30 
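The bracketing noise around every rpc_cmd -- xtrace_disable at autotest_common.sh@561, set +x at @10, then [[ 0 == 0 ]] at @589 -- is the harness muting set -x while the RPC runs and re-asserting its exit code once tracing is back on. A rough sketch of that wrapper; the real helper in autotest_common.sh is more elaborate:

rpc_cmd() {
    xtrace_disable              # the "@561 xtrace_disable" lines above
    scripts/rpc.py "$@"
    local rc=$?                 # capture before anything else overwrites it
    xtrace_restore
    [[ $rc == 0 ]]              # traced as "[[ 0 == 0 ]]" on success
}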
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.139 { 00:16:54.139 "cntlid": 53, 00:16:54.139 "qid": 0, 00:16:54.139 "state": "enabled", 00:16:54.139 "thread": "nvmf_tgt_poll_group_000", 00:16:54.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.139 "listen_address": { 00:16:54.139 "trtype": "TCP", 00:16:54.139 "adrfam": "IPv4", 00:16:54.139 "traddr": "10.0.0.2", 00:16:54.139 "trsvcid": "4420" 00:16:54.139 }, 00:16:54.139 "peer_address": { 00:16:54.139 "trtype": "TCP", 00:16:54.139 "adrfam": "IPv4", 00:16:54.139 "traddr": "10.0.0.1", 00:16:54.139 "trsvcid": "40270" 00:16:54.139 }, 00:16:54.139 "auth": { 00:16:54.139 "state": "completed", 00:16:54.139 "digest": "sha384", 00:16:54.139 "dhgroup": "null" 00:16:54.139 } 00:16:54.139 } 00:16:54.139 ]' 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.139 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.399 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:54.399 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.340 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.341 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.601 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.601 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.862 { 00:16:55.862 "cntlid": 55, 00:16:55.862 "qid": 0, 00:16:55.862 "state": "enabled", 00:16:55.862 "thread": "nvmf_tgt_poll_group_000", 00:16:55.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.862 "listen_address": { 00:16:55.862 "trtype": "TCP", 00:16:55.862 "adrfam": "IPv4", 00:16:55.862 "traddr": "10.0.0.2", 00:16:55.862 "trsvcid": "4420" 00:16:55.862 }, 00:16:55.862 "peer_address": { 00:16:55.862 "trtype": "TCP", 00:16:55.862 "adrfam": "IPv4", 00:16:55.862 "traddr": "10.0.0.1", 00:16:55.862 "trsvcid": "52690" 00:16:55.862 }, 00:16:55.862 "auth": { 00:16:55.862 "state": "completed", 00:16:55.862 "digest": "sha384", 00:16:55.862 "dhgroup": "null" 00:16:55.862 } 00:16:55.862 } 00:16:55.862 ]' 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.862 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.122 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:56.122 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.693 12:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.693 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.954 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.215 00:16:57.215 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.215 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.215 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.475 { 00:16:57.475 "cntlid": 57, 00:16:57.475 "qid": 0, 00:16:57.475 "state": "enabled", 00:16:57.475 "thread": "nvmf_tgt_poll_group_000", 00:16:57.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.475 "listen_address": { 00:16:57.475 "trtype": "TCP", 00:16:57.475 "adrfam": "IPv4", 00:16:57.475 "traddr": "10.0.0.2", 00:16:57.475 "trsvcid": "4420" 00:16:57.475 }, 00:16:57.475 "peer_address": { 00:16:57.475 "trtype": "TCP", 00:16:57.475 "adrfam": "IPv4", 00:16:57.475 "traddr": "10.0.0.1", 00:16:57.475 "trsvcid": "52726" 00:16:57.475 }, 00:16:57.475 "auth": { 00:16:57.475 "state": "completed", 00:16:57.475 "digest": "sha384", 00:16:57.475 "dhgroup": "ffdhe2048" 00:16:57.475 } 00:16:57.475 } 00:16:57.475 ]' 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.475 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.736 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:57.736 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.306 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.566 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.827 00:16:58.827 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.827 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.827 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.087 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.087 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.088 { 00:16:59.088 "cntlid": 59, 00:16:59.088 "qid": 0, 00:16:59.088 "state": "enabled", 00:16:59.088 "thread": "nvmf_tgt_poll_group_000", 00:16:59.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.088 "listen_address": { 00:16:59.088 "trtype": "TCP", 00:16:59.088 "adrfam": "IPv4", 00:16:59.088 "traddr": "10.0.0.2", 00:16:59.088 "trsvcid": "4420" 00:16:59.088 }, 00:16:59.088 "peer_address": { 00:16:59.088 "trtype": "TCP", 00:16:59.088 "adrfam": "IPv4", 00:16:59.088 "traddr": "10.0.0.1", 00:16:59.088 "trsvcid": "52758" 00:16:59.088 }, 00:16:59.088 "auth": { 00:16:59.088 "state": "completed", 00:16:59.088 "digest": "sha384", 00:16:59.088 "dhgroup": "ffdhe2048" 00:16:59.088 } 00:16:59.088 } 00:16:59.088 ]' 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.088 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.349 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:59.349 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.919 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.180 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.439 00:17:00.439 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.439 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.439 12:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.699 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.699 { 00:17:00.699 "cntlid": 61, 00:17:00.699 "qid": 0, 00:17:00.699 "state": "enabled", 00:17:00.699 "thread": "nvmf_tgt_poll_group_000", 00:17:00.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.699 "listen_address": { 00:17:00.699 "trtype": "TCP", 00:17:00.699 "adrfam": "IPv4", 00:17:00.699 "traddr": "10.0.0.2", 00:17:00.699 "trsvcid": "4420" 00:17:00.699 }, 00:17:00.699 "peer_address": { 00:17:00.699 "trtype": "TCP", 00:17:00.699 "adrfam": "IPv4", 00:17:00.699 "traddr": "10.0.0.1", 00:17:00.699 "trsvcid": "52784" 00:17:00.699 }, 00:17:00.699 "auth": { 00:17:00.699 "state": "completed", 00:17:00.700 "digest": "sha384", 00:17:00.700 "dhgroup": "ffdhe2048" 00:17:00.700 } 00:17:00.700 } 00:17:00.700 ]' 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.700 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.041 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:01.041 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.688 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.947 00:17:01.947 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.947 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.947 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.207 { 00:17:02.207 "cntlid": 63, 00:17:02.207 "qid": 0, 00:17:02.207 "state": "enabled", 00:17:02.207 "thread": "nvmf_tgt_poll_group_000", 00:17:02.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.207 "listen_address": { 00:17:02.207 "trtype": "TCP", 00:17:02.207 "adrfam": "IPv4", 00:17:02.207 "traddr": "10.0.0.2", 00:17:02.207 "trsvcid": "4420" 00:17:02.207 }, 00:17:02.207 "peer_address": { 00:17:02.207 "trtype": "TCP", 00:17:02.207 "adrfam": "IPv4", 00:17:02.207 "traddr": "10.0.0.1", 00:17:02.207 "trsvcid": "52808" 00:17:02.207 }, 00:17:02.207 "auth": { 00:17:02.207 "state": "completed", 00:17:02.207 "digest": "sha384", 00:17:02.207 "dhgroup": "ffdhe2048" 00:17:02.207 } 00:17:02.207 } 00:17:02.207 ]' 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.207 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.466 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.466 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.466 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.466 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:02.466 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:03.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.410 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.671 
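Each connect_authenticate pass in the trace above exercises the same five-step RPC sequence: constrain the host initiator to a single digest/dhgroup pair, register the host NQN on the subsystem with its DH-HMAC-CHAP key(s), attach a controller (which runs the handshake), confirm on the target that the qpair's auth state reached "completed", then tear everything down for the next combination. A minimal standalone sketch of one pass follows; it assumes rpc.py from the SPDK tree is on PATH, the host-side app listens on /var/tmp/host.sock as in the trace, the target app listens on the default RPC socket, and keys key0/ckey0 were loaded earlier in the script (that setup precedes this excerpt).

  rpc=rpc.py                       # scripts/rpc.py in an SPDK checkout
  hostsock=/var/tmp/host.sock      # host-side SPDK app, as in the trace
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0

  # 1. Limit the initiator to one digest and one DH group.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2. Admit the host NQN on the subsystem, binding host and controller keys.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host side; this drives the DH-HMAC-CHAP handshake.
  $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Verify on the target that the new qpair finished authenticating.
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect "completed"

  # 5. Tear down before the next digest/dhgroup/key combination.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The trace additionally checks '.[0].auth.digest' and '.[0].auth.dhgroup' against the values it configured in step 1, which is why every iteration prints the three jq probes before detaching.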
00:17:03.671 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.671 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.671 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.932 { 00:17:03.932 "cntlid": 65, 00:17:03.932 "qid": 0, 00:17:03.932 "state": "enabled", 00:17:03.932 "thread": "nvmf_tgt_poll_group_000", 00:17:03.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.932 "listen_address": { 00:17:03.932 "trtype": "TCP", 00:17:03.932 "adrfam": "IPv4", 00:17:03.932 "traddr": "10.0.0.2", 00:17:03.932 "trsvcid": "4420" 00:17:03.932 }, 00:17:03.932 "peer_address": { 00:17:03.932 "trtype": "TCP", 00:17:03.932 "adrfam": "IPv4", 00:17:03.932 "traddr": "10.0.0.1", 00:17:03.932 "trsvcid": "52836" 00:17:03.932 }, 00:17:03.932 "auth": { 00:17:03.932 "state": "completed", 00:17:03.932 "digest": "sha384", 00:17:03.932 "dhgroup": "ffdhe3072" 00:17:03.932 } 00:17:03.932 } 00:17:03.932 ]' 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.932 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.193 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:04.193 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.763 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.024 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.025 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.285 00:17:05.285 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.285 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.285 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.546 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.546 { 00:17:05.546 "cntlid": 67, 00:17:05.546 "qid": 0, 00:17:05.546 "state": "enabled", 00:17:05.546 "thread": "nvmf_tgt_poll_group_000", 00:17:05.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.546 "listen_address": { 00:17:05.546 "trtype": "TCP", 00:17:05.546 "adrfam": "IPv4", 00:17:05.547 "traddr": "10.0.0.2", 00:17:05.547 "trsvcid": "4420" 00:17:05.547 }, 00:17:05.547 "peer_address": { 00:17:05.547 "trtype": "TCP", 00:17:05.547 "adrfam": "IPv4", 00:17:05.547 "traddr": "10.0.0.1", 00:17:05.547 "trsvcid": "36214" 00:17:05.547 }, 00:17:05.547 "auth": { 00:17:05.547 "state": "completed", 00:17:05.547 "digest": "sha384", 00:17:05.547 "dhgroup": "ffdhe3072" 00:17:05.547 } 00:17:05.547 } 00:17:05.547 ]' 00:17:05.547 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.547 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.807 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret 
DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:05.807 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:06.377 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.377 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.377 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.378 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.378 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.378 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.378 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.378 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.638 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.899 00:17:06.899 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.899 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.899 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.160 { 00:17:07.160 "cntlid": 69, 00:17:07.160 "qid": 0, 00:17:07.160 "state": "enabled", 00:17:07.160 "thread": "nvmf_tgt_poll_group_000", 00:17:07.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.160 "listen_address": { 00:17:07.160 "trtype": "TCP", 00:17:07.160 "adrfam": "IPv4", 00:17:07.160 "traddr": "10.0.0.2", 00:17:07.160 "trsvcid": "4420" 00:17:07.160 }, 00:17:07.160 "peer_address": { 00:17:07.160 "trtype": "TCP", 00:17:07.160 "adrfam": "IPv4", 00:17:07.160 "traddr": "10.0.0.1", 00:17:07.160 "trsvcid": "36232" 00:17:07.160 }, 00:17:07.160 "auth": { 00:17:07.160 "state": "completed", 00:17:07.160 "digest": "sha384", 00:17:07.160 "dhgroup": "ffdhe3072" 00:17:07.160 } 00:17:07.160 } 00:17:07.160 ]' 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.160 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:07.421 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:07.421 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.993 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
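One detail worth flagging in the pass just above: the key3 iterations call nvmf_subsystem_add_host and bdev_connect with only --dhchap-key key3 and no --dhchap-ctrlr-key, whereas keys 0 through 2 always carried a matching ckeyN. That is the effect of the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible at target/auth.sh@68: bash's ${var:+word} substitutes the flag pair only when a controller key exists for that index, so the last key exercises unidirectional (host-only) authentication. A reduced illustration of the idiom, using hypothetical key names rather than the script's real key table:

  # ${var:+word} yields word only when var is set and non-empty, so an
  # optional flag pair can be spliced into a command through an array.
  ckeys=("ckeyA" "ckeyB" "")        # hypothetical; third entry empty on purpose
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo nvmf_subsystem_add_host SUBNQN HOSTNQN --dhchap-key "key$keyid" "${ckey[@]}"
  done
  # keyids 0 and 1 print the --dhchap-ctrlr-key pair; keyid 2 omits it,
  # because an empty array expands to zero words under "${ckey[@]}".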
00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.253 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.514 00:17:08.514 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.514 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.514 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.775 { 00:17:08.775 "cntlid": 71, 00:17:08.775 "qid": 0, 00:17:08.775 "state": "enabled", 00:17:08.775 "thread": "nvmf_tgt_poll_group_000", 00:17:08.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.775 "listen_address": { 00:17:08.775 "trtype": "TCP", 00:17:08.775 "adrfam": "IPv4", 00:17:08.775 "traddr": "10.0.0.2", 00:17:08.775 "trsvcid": "4420" 00:17:08.775 }, 00:17:08.775 "peer_address": { 00:17:08.775 "trtype": "TCP", 00:17:08.775 "adrfam": "IPv4", 00:17:08.775 "traddr": "10.0.0.1", 00:17:08.775 "trsvcid": "36256" 00:17:08.775 }, 00:17:08.775 "auth": { 00:17:08.775 "state": "completed", 00:17:08.775 "digest": "sha384", 00:17:08.775 "dhgroup": "ffdhe3072" 00:17:08.775 } 00:17:08.775 } 00:17:08.775 ]' 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.775 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.035 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:09.036 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.607 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
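Each connect_authenticate pass in this stretch of the trace is the same RPC sequence with a different digest/dhgroup/key triple; a condensed sketch of the single pass that starts here (sha384 / ffdhe4096 / key0), assuming rpc_cmd's default target socket for the nvmf_* calls, the host stack driven through -s /var/tmp/host.sock, and key0/ckey0 registered as key names earlier in auth.sh (outside this excerpt):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1. Constrain the host-side bdev layer to the digest/dhgroup under test.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # 2. Authorize the host on the subsystem with the key pair under test (target-side RPC).
    "$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller through the host stack, authenticating with the same keys.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify the controller came up and the qpair finished DH-HMAC-CHAP.
    [[ "$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

    # 5. Detach before the nvme-cli round-trip and the next combination.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0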
00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.868 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.127 00:17:10.127 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.127 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.128 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.388 { 00:17:10.388 "cntlid": 73, 00:17:10.388 "qid": 0, 00:17:10.388 "state": "enabled", 00:17:10.388 "thread": "nvmf_tgt_poll_group_000", 00:17:10.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.388 "listen_address": { 00:17:10.388 "trtype": "TCP", 00:17:10.388 "adrfam": "IPv4", 00:17:10.388 "traddr": "10.0.0.2", 00:17:10.388 "trsvcid": "4420" 00:17:10.388 }, 00:17:10.388 "peer_address": { 00:17:10.388 "trtype": "TCP", 00:17:10.388 "adrfam": "IPv4", 00:17:10.388 "traddr": "10.0.0.1", 00:17:10.388 "trsvcid": "36282" 00:17:10.388 }, 00:17:10.388 "auth": { 00:17:10.388 "state": "completed", 00:17:10.388 "digest": "sha384", 00:17:10.388 "dhgroup": "ffdhe4096" 00:17:10.388 } 00:17:10.388 } 00:17:10.388 ]' 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.388 
12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.388 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.648 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:10.648 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.218 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.479 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.739 00:17:11.739 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.739 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.739 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.000 { 00:17:12.000 "cntlid": 75, 00:17:12.000 "qid": 0, 00:17:12.000 "state": "enabled", 00:17:12.000 "thread": "nvmf_tgt_poll_group_000", 00:17:12.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.000 "listen_address": { 00:17:12.000 "trtype": "TCP", 00:17:12.000 "adrfam": "IPv4", 00:17:12.000 "traddr": "10.0.0.2", 00:17:12.000 "trsvcid": "4420" 00:17:12.000 }, 00:17:12.000 "peer_address": { 00:17:12.000 "trtype": "TCP", 00:17:12.000 "adrfam": "IPv4", 00:17:12.000 "traddr": "10.0.0.1", 00:17:12.000 "trsvcid": "36328" 00:17:12.000 }, 00:17:12.000 "auth": { 00:17:12.000 "state": "completed", 00:17:12.000 "digest": "sha384", 00:17:12.000 "dhgroup": "ffdhe4096" 00:17:12.000 } 00:17:12.000 } 00:17:12.000 ]' 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.000 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.273 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:12.273 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.853 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.854 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.854 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.114 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.375 00:17:13.375 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.375 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.375 12:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.635 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.636 { 00:17:13.636 "cntlid": 77, 00:17:13.636 "qid": 0, 00:17:13.636 "state": "enabled", 00:17:13.636 "thread": "nvmf_tgt_poll_group_000", 00:17:13.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.636 "listen_address": { 00:17:13.636 "trtype": "TCP", 00:17:13.636 "adrfam": "IPv4", 00:17:13.636 "traddr": "10.0.0.2", 00:17:13.636 "trsvcid": "4420" 00:17:13.636 }, 00:17:13.636 "peer_address": { 00:17:13.636 "trtype": "TCP", 00:17:13.636 "adrfam": "IPv4", 00:17:13.636 "traddr": "10.0.0.1", 00:17:13.636 "trsvcid": "36348" 00:17:13.636 }, 00:17:13.636 "auth": { 00:17:13.636 "state": "completed", 00:17:13.636 "digest": "sha384", 00:17:13.636 "dhgroup": "ffdhe4096" 00:17:13.636 } 00:17:13.636 } 00:17:13.636 ]' 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.636 12:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.636 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.896 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:13.896 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:14.466 12:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.466 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.727 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.987 00:17:14.987 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.987 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.987 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.247 { 00:17:15.247 "cntlid": 79, 00:17:15.247 "qid": 0, 00:17:15.247 "state": "enabled", 00:17:15.247 "thread": "nvmf_tgt_poll_group_000", 00:17:15.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.247 "listen_address": { 00:17:15.247 "trtype": "TCP", 00:17:15.247 "adrfam": "IPv4", 00:17:15.247 "traddr": "10.0.0.2", 00:17:15.247 "trsvcid": "4420" 00:17:15.247 }, 00:17:15.247 "peer_address": { 00:17:15.247 "trtype": "TCP", 00:17:15.247 "adrfam": "IPv4", 00:17:15.247 "traddr": "10.0.0.1", 00:17:15.247 "trsvcid": "49948" 00:17:15.247 }, 00:17:15.247 "auth": { 00:17:15.247 "state": "completed", 00:17:15.247 "digest": "sha384", 00:17:15.247 "dhgroup": "ffdhe4096" 00:17:15.247 } 00:17:15.247 } 00:17:15.247 ]' 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.247 12:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.247 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.508 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:15.508 12:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.078 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.340 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.340 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.601 00:17:16.601 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.601 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.601 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.861 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.861 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.861 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.861 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.861 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.862 { 00:17:16.862 "cntlid": 81, 00:17:16.862 "qid": 0, 00:17:16.862 "state": "enabled", 00:17:16.862 "thread": "nvmf_tgt_poll_group_000", 00:17:16.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.862 "listen_address": { 00:17:16.862 "trtype": "TCP", 00:17:16.862 "adrfam": "IPv4", 00:17:16.862 "traddr": "10.0.0.2", 00:17:16.862 "trsvcid": "4420" 00:17:16.862 }, 00:17:16.862 "peer_address": { 00:17:16.862 "trtype": "TCP", 00:17:16.862 "adrfam": "IPv4", 00:17:16.862 "traddr": "10.0.0.1", 00:17:16.862 "trsvcid": "49982" 00:17:16.862 }, 00:17:16.862 "auth": { 00:17:16.862 "state": "completed", 00:17:16.862 "digest": 
"sha384", 00:17:16.862 "dhgroup": "ffdhe6144" 00:17:16.862 } 00:17:16.862 } 00:17:16.862 ]' 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.862 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.122 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.122 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.122 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.122 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:17.122 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:17.694 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.955 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.523 00:17:18.523 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.523 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.523 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.523 { 00:17:18.523 "cntlid": 83, 00:17:18.523 "qid": 0, 00:17:18.523 "state": "enabled", 00:17:18.523 "thread": "nvmf_tgt_poll_group_000", 00:17:18.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.523 "listen_address": { 00:17:18.523 "trtype": "TCP", 00:17:18.523 "adrfam": "IPv4", 00:17:18.523 "traddr": "10.0.0.2", 00:17:18.523 
"trsvcid": "4420" 00:17:18.523 }, 00:17:18.523 "peer_address": { 00:17:18.523 "trtype": "TCP", 00:17:18.523 "adrfam": "IPv4", 00:17:18.523 "traddr": "10.0.0.1", 00:17:18.523 "trsvcid": "50012" 00:17:18.523 }, 00:17:18.523 "auth": { 00:17:18.523 "state": "completed", 00:17:18.523 "digest": "sha384", 00:17:18.523 "dhgroup": "ffdhe6144" 00:17:18.523 } 00:17:18.523 } 00:17:18.523 ]' 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.523 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:18.784 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.724 
12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.724 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.984 00:17:19.984 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.984 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.984 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.245 { 00:17:20.245 "cntlid": 85, 00:17:20.245 "qid": 0, 00:17:20.245 "state": "enabled", 00:17:20.245 "thread": "nvmf_tgt_poll_group_000", 00:17:20.245 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.245 "listen_address": { 00:17:20.245 "trtype": "TCP", 00:17:20.245 "adrfam": "IPv4", 00:17:20.245 "traddr": "10.0.0.2", 00:17:20.245 "trsvcid": "4420" 00:17:20.245 }, 00:17:20.245 "peer_address": { 00:17:20.245 "trtype": "TCP", 00:17:20.245 "adrfam": "IPv4", 00:17:20.245 "traddr": "10.0.0.1", 00:17:20.245 "trsvcid": "50028" 00:17:20.245 }, 00:17:20.245 "auth": { 00:17:20.245 "state": "completed", 00:17:20.245 "digest": "sha384", 00:17:20.245 "dhgroup": "ffdhe6144" 00:17:20.245 } 00:17:20.245 } 00:17:20.245 ]' 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.245 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.506 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.506 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.506 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.506 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.506 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.506 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:20.506 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.448 12:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.448 12:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.710 00:17:21.710 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.710 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.710 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.972 { 00:17:21.972 "cntlid": 87, 
00:17:21.972 "qid": 0, 00:17:21.972 "state": "enabled", 00:17:21.972 "thread": "nvmf_tgt_poll_group_000", 00:17:21.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.972 "listen_address": { 00:17:21.972 "trtype": "TCP", 00:17:21.972 "adrfam": "IPv4", 00:17:21.972 "traddr": "10.0.0.2", 00:17:21.972 "trsvcid": "4420" 00:17:21.972 }, 00:17:21.972 "peer_address": { 00:17:21.972 "trtype": "TCP", 00:17:21.972 "adrfam": "IPv4", 00:17:21.972 "traddr": "10.0.0.1", 00:17:21.972 "trsvcid": "50054" 00:17:21.972 }, 00:17:21.972 "auth": { 00:17:21.972 "state": "completed", 00:17:21.972 "digest": "sha384", 00:17:21.972 "dhgroup": "ffdhe6144" 00:17:21.972 } 00:17:21.972 } 00:17:21.972 ]' 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.972 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:22.233 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.174 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.745 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.745 { 00:17:23.745 "cntlid": 89, 00:17:23.745 "qid": 0, 00:17:23.745 "state": "enabled", 00:17:23.745 "thread": "nvmf_tgt_poll_group_000", 00:17:23.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.745 "listen_address": { 00:17:23.745 "trtype": "TCP", 00:17:23.745 "adrfam": "IPv4", 00:17:23.745 "traddr": "10.0.0.2", 00:17:23.745 "trsvcid": "4420" 00:17:23.745 }, 00:17:23.745 "peer_address": { 00:17:23.745 "trtype": "TCP", 00:17:23.745 "adrfam": "IPv4", 00:17:23.745 "traddr": "10.0.0.1", 00:17:23.745 "trsvcid": "50080" 00:17:23.745 }, 00:17:23.745 "auth": { 00:17:23.745 "state": "completed", 00:17:23.745 "digest": "sha384", 00:17:23.745 "dhgroup": "ffdhe8192" 00:17:23.745 } 00:17:23.745 } 00:17:23.745 ]' 00:17:23.745 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.006 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.266 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:24.267 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.836 12:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.836 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.095 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.356 00:17:25.616 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.616 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.616 12:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.616 { 00:17:25.616 "cntlid": 91, 00:17:25.616 "qid": 0, 00:17:25.616 "state": "enabled", 00:17:25.616 "thread": "nvmf_tgt_poll_group_000", 00:17:25.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.616 "listen_address": { 00:17:25.616 "trtype": "TCP", 00:17:25.616 "adrfam": "IPv4", 00:17:25.616 "traddr": "10.0.0.2", 00:17:25.616 "trsvcid": "4420" 00:17:25.616 }, 00:17:25.616 "peer_address": { 00:17:25.616 "trtype": "TCP", 00:17:25.616 "adrfam": "IPv4", 00:17:25.616 "traddr": "10.0.0.1", 00:17:25.616 "trsvcid": "55152" 00:17:25.616 }, 00:17:25.616 "auth": { 00:17:25.616 "state": "completed", 00:17:25.616 "digest": "sha384", 00:17:25.616 "dhgroup": "ffdhe8192" 00:17:25.616 } 00:17:25.616 } 00:17:25.616 ]' 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.616 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:25.876 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.817 12:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.817 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.389 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.389 12:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.389 { 00:17:27.389 "cntlid": 93, 00:17:27.389 "qid": 0, 00:17:27.389 "state": "enabled", 00:17:27.389 "thread": "nvmf_tgt_poll_group_000", 00:17:27.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.389 "listen_address": { 00:17:27.389 "trtype": "TCP", 00:17:27.389 "adrfam": "IPv4", 00:17:27.389 "traddr": "10.0.0.2", 00:17:27.389 "trsvcid": "4420" 00:17:27.389 }, 00:17:27.389 "peer_address": { 00:17:27.389 "trtype": "TCP", 00:17:27.389 "adrfam": "IPv4", 00:17:27.389 "traddr": "10.0.0.1", 00:17:27.389 "trsvcid": "55184" 00:17:27.389 }, 00:17:27.389 "auth": { 00:17:27.389 "state": "completed", 00:17:27.389 "digest": "sha384", 00:17:27.389 "dhgroup": "ffdhe8192" 00:17:27.389 } 00:17:27.389 } 00:17:27.389 ]' 00:17:27.389 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.649 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.909 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:27.909 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.480 12:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.480 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.740 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.741 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.741 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.741 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.741 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.001 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.261 { 00:17:29.261 "cntlid": 95, 00:17:29.261 "qid": 0, 00:17:29.261 "state": "enabled", 00:17:29.261 "thread": "nvmf_tgt_poll_group_000", 00:17:29.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.261 "listen_address": { 00:17:29.261 "trtype": "TCP", 00:17:29.261 "adrfam": "IPv4", 00:17:29.261 "traddr": "10.0.0.2", 00:17:29.261 "trsvcid": "4420" 00:17:29.261 }, 00:17:29.261 "peer_address": { 00:17:29.261 "trtype": "TCP", 00:17:29.261 "adrfam": "IPv4", 00:17:29.261 "traddr": "10.0.0.1", 00:17:29.261 "trsvcid": "55214" 00:17:29.261 }, 00:17:29.261 "auth": { 00:17:29.261 "state": "completed", 00:17:29.261 "digest": "sha384", 00:17:29.261 "dhgroup": "ffdhe8192" 00:17:29.261 } 00:17:29.261 } 00:17:29.261 ]' 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.261 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.522 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.522 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.522 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.522 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.522 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.782 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:29.782 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.352 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.352 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.613 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.614 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.614 00:17:30.614 
12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.614 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.614 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.874 { 00:17:30.874 "cntlid": 97, 00:17:30.874 "qid": 0, 00:17:30.874 "state": "enabled", 00:17:30.874 "thread": "nvmf_tgt_poll_group_000", 00:17:30.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.874 "listen_address": { 00:17:30.874 "trtype": "TCP", 00:17:30.874 "adrfam": "IPv4", 00:17:30.874 "traddr": "10.0.0.2", 00:17:30.874 "trsvcid": "4420" 00:17:30.874 }, 00:17:30.874 "peer_address": { 00:17:30.874 "trtype": "TCP", 00:17:30.874 "adrfam": "IPv4", 00:17:30.874 "traddr": "10.0.0.1", 00:17:30.874 "trsvcid": "55248" 00:17:30.874 }, 00:17:30.874 "auth": { 00:17:30.874 "state": "completed", 00:17:30.874 "digest": "sha512", 00:17:30.874 "dhgroup": "null" 00:17:30.874 } 00:17:30.874 } 00:17:30.874 ]' 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.874 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:31.135 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.076 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.077 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.343 00:17:32.343 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.343 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.343 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.602 { 00:17:32.602 "cntlid": 99, 00:17:32.602 "qid": 0, 00:17:32.602 "state": "enabled", 00:17:32.602 "thread": "nvmf_tgt_poll_group_000", 00:17:32.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.602 "listen_address": { 00:17:32.602 "trtype": "TCP", 00:17:32.602 "adrfam": "IPv4", 00:17:32.602 "traddr": "10.0.0.2", 00:17:32.602 "trsvcid": "4420" 00:17:32.602 }, 00:17:32.602 "peer_address": { 00:17:32.602 "trtype": "TCP", 00:17:32.602 "adrfam": "IPv4", 00:17:32.602 "traddr": "10.0.0.1", 00:17:32.602 "trsvcid": "55280" 00:17:32.602 }, 00:17:32.602 "auth": { 00:17:32.602 "state": "completed", 00:17:32.602 "digest": "sha512", 00:17:32.602 "dhgroup": "null" 00:17:32.602 } 00:17:32.602 } 00:17:32.602 ]' 00:17:32.602 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.602 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.602 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.602 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.602 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.603 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.603 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.603 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.942 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:32.942 12:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.513 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:33.774 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.035 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.035 { 00:17:34.035 "cntlid": 101, 00:17:34.035 "qid": 0, 00:17:34.035 "state": "enabled", 00:17:34.035 "thread": "nvmf_tgt_poll_group_000", 00:17:34.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.035 "listen_address": { 00:17:34.035 "trtype": "TCP", 00:17:34.035 "adrfam": "IPv4", 00:17:34.035 "traddr": "10.0.0.2", 00:17:34.035 "trsvcid": "4420" 00:17:34.035 }, 00:17:34.035 "peer_address": { 00:17:34.035 "trtype": "TCP", 00:17:34.035 "adrfam": "IPv4", 00:17:34.035 "traddr": "10.0.0.1", 00:17:34.035 "trsvcid": "55310" 00:17:34.035 }, 00:17:34.035 "auth": { 00:17:34.035 "state": "completed", 00:17:34.035 "digest": "sha512", 00:17:34.035 "dhgroup": "null" 00:17:34.035 } 00:17:34.035 } 00:17:34.035 ]' 00:17:34.035 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.295 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.556 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:34.556 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.128 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.388 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.648 00:17:35.648 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.648 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.648 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.648 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.649 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.649 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.649 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.909 { 00:17:35.909 "cntlid": 103, 00:17:35.909 "qid": 0, 00:17:35.909 "state": "enabled", 00:17:35.909 "thread": "nvmf_tgt_poll_group_000", 00:17:35.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.909 "listen_address": { 00:17:35.909 "trtype": "TCP", 00:17:35.909 "adrfam": "IPv4", 00:17:35.909 "traddr": "10.0.0.2", 00:17:35.909 "trsvcid": "4420" 00:17:35.909 }, 00:17:35.909 "peer_address": { 00:17:35.909 "trtype": "TCP", 00:17:35.909 "adrfam": "IPv4", 00:17:35.909 "traddr": "10.0.0.1", 00:17:35.909 "trsvcid": "38426" 00:17:35.909 }, 00:17:35.909 "auth": { 00:17:35.909 "state": "completed", 00:17:35.909 "digest": "sha512", 00:17:35.909 "dhgroup": "null" 00:17:35.909 } 00:17:35.909 } 00:17:35.909 ]' 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.909 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.910 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.170 12:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:36.170 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.742 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
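
[editor's note] The block that just repeated above is one pass of auth.sh's connect_authenticate helper; every later repetition in this log differs only in the digest/dhgroup/key triple. A condensed, hand-written sketch of the RPC sequence it drives follows — the rpc.py path, socket, NQNs, and key names are copied from the trace, while the $rpc shorthand and the step comments are illustrative only (in the actual script, nvmf_subsystem_add_host/remove_host go through rpc_cmd to the target's RPC socket, not host.sock):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # 1. Pin the host to a single digest/dhgroup pair for this pass.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # 2. Authorize the host NQN on the subsystem with the key pair under test
    #    (target-side RPC in the real script).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller; DH-HMAC-CHAP runs during this connect.
    $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify the controller came up, then tear down for the next combination.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
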
00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.003 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.264 00:17:37.264 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.264 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.264 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.265 { 00:17:37.265 "cntlid": 105, 00:17:37.265 "qid": 0, 00:17:37.265 "state": "enabled", 00:17:37.265 "thread": "nvmf_tgt_poll_group_000", 00:17:37.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.265 "listen_address": { 00:17:37.265 "trtype": "TCP", 00:17:37.265 "adrfam": "IPv4", 00:17:37.265 "traddr": "10.0.0.2", 00:17:37.265 "trsvcid": "4420" 00:17:37.265 }, 00:17:37.265 "peer_address": { 00:17:37.265 "trtype": "TCP", 00:17:37.265 "adrfam": "IPv4", 00:17:37.265 "traddr": "10.0.0.1", 00:17:37.265 "trsvcid": "38434" 00:17:37.265 }, 00:17:37.265 "auth": { 00:17:37.265 "state": "completed", 00:17:37.265 "digest": "sha512", 00:17:37.265 "dhgroup": "ffdhe2048" 00:17:37.265 } 00:17:37.265 } 00:17:37.265 ]' 00:17:37.265 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.526 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.526 12:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.786 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:37.787 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.358 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.618 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.878 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.878 { 00:17:38.878 "cntlid": 107, 00:17:38.878 "qid": 0, 00:17:38.878 "state": "enabled", 00:17:38.878 "thread": "nvmf_tgt_poll_group_000", 00:17:38.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.878 "listen_address": { 00:17:38.878 "trtype": "TCP", 00:17:38.878 "adrfam": "IPv4", 00:17:38.878 "traddr": "10.0.0.2", 00:17:38.878 "trsvcid": "4420" 00:17:38.878 }, 00:17:38.878 "peer_address": { 00:17:38.878 "trtype": "TCP", 00:17:38.878 "adrfam": "IPv4", 00:17:38.878 "traddr": "10.0.0.1", 00:17:38.878 "trsvcid": "38464" 00:17:38.878 }, 00:17:38.878 "auth": { 00:17:38.878 "state": "completed", 00:17:38.878 "digest": "sha512", 00:17:38.878 "dhgroup": "ffdhe2048" 00:17:38.878 } 00:17:38.878 } 00:17:38.878 ]' 00:17:38.878 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.139 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.451 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:39.451 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
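
[editor's note] The JSON arrays interleaved through the trace are rpc_cmd nvmf_subsystem_get_qpairs output, and the three jq probes that follow each dump are the actual pass/fail checks. In isolation the assertion step looks roughly like this (a sketch: the here-strings stand in for the script's $qpairs expansion, and the expected values are the ones this sha512/ffdhe2048 pass negotiated):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Authentication succeeded iff the negotiated parameters match what
    # bdev_nvme_set_options requested and the auth state machine finished.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
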
00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.116 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.377 00:17:40.377 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.377 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.377 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.638 { 00:17:40.638 "cntlid": 109, 00:17:40.638 "qid": 0, 00:17:40.638 "state": "enabled", 00:17:40.638 "thread": "nvmf_tgt_poll_group_000", 00:17:40.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.638 "listen_address": { 00:17:40.638 "trtype": "TCP", 00:17:40.638 "adrfam": "IPv4", 00:17:40.638 "traddr": "10.0.0.2", 00:17:40.638 "trsvcid": "4420" 00:17:40.638 }, 00:17:40.638 "peer_address": { 00:17:40.638 "trtype": "TCP", 00:17:40.638 "adrfam": "IPv4", 00:17:40.638 "traddr": "10.0.0.1", 00:17:40.638 "trsvcid": "38490" 00:17:40.638 }, 00:17:40.638 "auth": { 00:17:40.638 "state": "completed", 00:17:40.638 "digest": "sha512", 00:17:40.638 "dhgroup": "ffdhe2048" 00:17:40.638 } 00:17:40.638 } 00:17:40.638 ]' 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.638 12:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.638 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.899 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:40.899 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:41.470 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.470 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.731 12:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.731 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.991 00:17:41.991 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.991 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.991 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.251 { 00:17:42.251 "cntlid": 111, 00:17:42.251 "qid": 0, 00:17:42.251 "state": "enabled", 00:17:42.251 "thread": "nvmf_tgt_poll_group_000", 00:17:42.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.251 "listen_address": { 00:17:42.251 "trtype": "TCP", 00:17:42.251 "adrfam": "IPv4", 00:17:42.251 "traddr": "10.0.0.2", 00:17:42.251 "trsvcid": "4420" 00:17:42.251 }, 00:17:42.251 "peer_address": { 00:17:42.251 "trtype": "TCP", 00:17:42.251 "adrfam": "IPv4", 00:17:42.251 "traddr": "10.0.0.1", 00:17:42.251 "trsvcid": "38526" 00:17:42.251 }, 00:17:42.251 "auth": { 00:17:42.251 "state": "completed", 00:17:42.251 "digest": "sha512", 00:17:42.251 "dhgroup": "ffdhe2048" 00:17:42.251 } 00:17:42.251 } 00:17:42.251 ]' 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.251 
12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.251 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.510 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:42.510 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.081 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.342 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.603 00:17:43.603 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.603 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.603 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.863 { 00:17:43.863 "cntlid": 113, 00:17:43.863 "qid": 0, 00:17:43.863 "state": "enabled", 00:17:43.863 "thread": "nvmf_tgt_poll_group_000", 00:17:43.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.863 "listen_address": { 00:17:43.863 "trtype": "TCP", 00:17:43.863 "adrfam": "IPv4", 00:17:43.863 "traddr": "10.0.0.2", 00:17:43.863 "trsvcid": "4420" 00:17:43.863 }, 00:17:43.863 "peer_address": { 00:17:43.863 "trtype": "TCP", 00:17:43.863 "adrfam": "IPv4", 00:17:43.863 "traddr": "10.0.0.1", 00:17:43.863 "trsvcid": "38568" 00:17:43.863 }, 00:17:43.863 "auth": { 00:17:43.863 "state": "completed", 00:17:43.863 "digest": "sha512", 00:17:43.863 "dhgroup": "ffdhe3072" 00:17:43.863 } 00:17:43.863 } 00:17:43.863 ]' 00:17:43.863 12:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.863 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.864 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.124 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:44.124 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:44.696 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.696 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.696 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.696 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.696 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.958 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.218 00:17:45.218 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.218 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.218 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.479 { 00:17:45.479 "cntlid": 115, 00:17:45.479 "qid": 0, 00:17:45.479 "state": "enabled", 00:17:45.479 "thread": "nvmf_tgt_poll_group_000", 00:17:45.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.479 "listen_address": { 00:17:45.479 "trtype": "TCP", 00:17:45.479 "adrfam": "IPv4", 00:17:45.479 "traddr": "10.0.0.2", 00:17:45.479 "trsvcid": "4420" 00:17:45.479 }, 00:17:45.479 "peer_address": { 00:17:45.479 "trtype": "TCP", 00:17:45.479 "adrfam": "IPv4", 
00:17:45.479 "traddr": "10.0.0.1", 00:17:45.479 "trsvcid": "40246" 00:17:45.479 }, 00:17:45.479 "auth": { 00:17:45.479 "state": "completed", 00:17:45.479 "digest": "sha512", 00:17:45.479 "dhgroup": "ffdhe3072" 00:17:45.479 } 00:17:45.479 } 00:17:45.479 ]' 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.479 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.479 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.479 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.740 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.740 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.740 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.740 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:45.740 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.682 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.682 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.944 00:17:46.944 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.944 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.944 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.205 { 00:17:47.205 "cntlid": 117, 00:17:47.205 "qid": 0, 00:17:47.205 "state": "enabled", 00:17:47.205 "thread": "nvmf_tgt_poll_group_000", 00:17:47.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.205 "listen_address": { 00:17:47.205 "trtype": "TCP", 
00:17:47.205 "adrfam": "IPv4", 00:17:47.205 "traddr": "10.0.0.2", 00:17:47.205 "trsvcid": "4420" 00:17:47.205 }, 00:17:47.205 "peer_address": { 00:17:47.205 "trtype": "TCP", 00:17:47.205 "adrfam": "IPv4", 00:17:47.205 "traddr": "10.0.0.1", 00:17:47.205 "trsvcid": "40274" 00:17:47.205 }, 00:17:47.205 "auth": { 00:17:47.205 "state": "completed", 00:17:47.205 "digest": "sha512", 00:17:47.205 "dhgroup": "ffdhe3072" 00:17:47.205 } 00:17:47.205 } 00:17:47.205 ]' 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.205 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.466 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:47.466 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.040 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.301 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.562 00:17:48.562 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.562 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.562 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.823 { 00:17:48.823 "cntlid": 119, 00:17:48.823 "qid": 0, 00:17:48.823 "state": "enabled", 00:17:48.823 "thread": "nvmf_tgt_poll_group_000", 00:17:48.823 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.823 "listen_address": { 00:17:48.823 "trtype": "TCP", 00:17:48.823 "adrfam": "IPv4", 00:17:48.823 "traddr": "10.0.0.2", 00:17:48.823 "trsvcid": "4420" 00:17:48.823 }, 00:17:48.823 "peer_address": { 00:17:48.823 "trtype": "TCP", 00:17:48.823 "adrfam": "IPv4", 00:17:48.823 "traddr": "10.0.0.1", 00:17:48.823 "trsvcid": "40316" 00:17:48.823 }, 00:17:48.823 "auth": { 00:17:48.823 "state": "completed", 00:17:48.823 "digest": "sha512", 00:17:48.823 "dhgroup": "ffdhe3072" 00:17:48.823 } 00:17:48.823 } 00:17:48.823 ]' 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.823 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.083 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:49.083 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.657 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.657 12:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.918 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.180 00:17:50.180 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.180 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.180 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.440 12:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.440 { 00:17:50.440 "cntlid": 121, 00:17:50.440 "qid": 0, 00:17:50.440 "state": "enabled", 00:17:50.440 "thread": "nvmf_tgt_poll_group_000", 00:17:50.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.440 "listen_address": { 00:17:50.440 "trtype": "TCP", 00:17:50.440 "adrfam": "IPv4", 00:17:50.440 "traddr": "10.0.0.2", 00:17:50.440 "trsvcid": "4420" 00:17:50.440 }, 00:17:50.440 "peer_address": { 00:17:50.440 "trtype": "TCP", 00:17:50.440 "adrfam": "IPv4", 00:17:50.440 "traddr": "10.0.0.1", 00:17:50.440 "trsvcid": "40356" 00:17:50.440 }, 00:17:50.440 "auth": { 00:17:50.440 "state": "completed", 00:17:50.440 "digest": "sha512", 00:17:50.440 "dhgroup": "ffdhe4096" 00:17:50.440 } 00:17:50.440 } 00:17:50.440 ]' 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.440 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.701 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:50.701 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:51.272 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
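The block above completes one full pass of the auth matrix: for each (digest, dhgroup, keyid) combination the harness configures the host initiator, authorizes the host NQN on the target, attaches and verifies an authenticated controller, then replays the handshake with the kernel initiator before cleaning up. Condensed to a single iteration, and using only the RPCs and nvme-cli calls visible in the trace (the rpc_cmd/hostrpc wrappers and the keys/ckeys tables come from target/auth.sh and are assumed here), the cycle is roughly:

    # Sketch of one connect_authenticate iteration (ffdhe4096/key0 shown).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: restrict the initiator to the digest/dhgroup under test.
    "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: authorize the host NQN with this iteration's key pair.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach an authenticated controller over TCP ...
    "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $("$RPC" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ... and assert the negotiated auth parameters on the resulting qpair.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Replay the handshake with the kernel initiator, then clean up.
    "$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "${HOSTNQN#*uuid:}" -l 0 \
        --dhchap-secret "${keys[0]}" --dhchap-ctrl-secret "${ckeys[0]}"
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The key3 iterations run the same cycle without any controller-key arguments; the note at the end of this section explains why.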
00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.273 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.534 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.795 00:17:51.795 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.795 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.795 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.054 { 00:17:52.054 "cntlid": 123, 00:17:52.054 "qid": 0, 00:17:52.054 "state": "enabled", 00:17:52.054 "thread": "nvmf_tgt_poll_group_000", 00:17:52.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.054 "listen_address": { 00:17:52.054 "trtype": "TCP", 00:17:52.054 "adrfam": "IPv4", 00:17:52.054 "traddr": "10.0.0.2", 00:17:52.054 "trsvcid": "4420" 00:17:52.054 }, 00:17:52.054 "peer_address": { 00:17:52.054 "trtype": "TCP", 00:17:52.054 "adrfam": "IPv4", 00:17:52.054 "traddr": "10.0.0.1", 00:17:52.054 "trsvcid": "40392" 00:17:52.054 }, 00:17:52.054 "auth": { 00:17:52.054 "state": "completed", 00:17:52.054 "digest": "sha512", 00:17:52.054 "dhgroup": "ffdhe4096" 00:17:52.054 } 00:17:52.054 } 00:17:52.054 ]' 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.054 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.317 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:52.317 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:52.891 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.153 12:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.153 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.414 00:17:53.414 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.414 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.414 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.675 12:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.675 { 00:17:53.675 "cntlid": 125, 00:17:53.675 "qid": 0, 00:17:53.675 "state": "enabled", 00:17:53.675 "thread": "nvmf_tgt_poll_group_000", 00:17:53.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.675 "listen_address": { 00:17:53.675 "trtype": "TCP", 00:17:53.675 "adrfam": "IPv4", 00:17:53.675 "traddr": "10.0.0.2", 00:17:53.675 "trsvcid": "4420" 00:17:53.675 }, 00:17:53.675 "peer_address": { 00:17:53.675 "trtype": "TCP", 00:17:53.675 "adrfam": "IPv4", 00:17:53.675 "traddr": "10.0.0.1", 00:17:53.675 "trsvcid": "40416" 00:17:53.675 }, 00:17:53.675 "auth": { 00:17:53.675 "state": "completed", 00:17:53.675 "digest": "sha512", 00:17:53.675 "dhgroup": "ffdhe4096" 00:17:53.675 } 00:17:53.675 } 00:17:53.675 ]' 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.675 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.935 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:53.935 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:17:54.506 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.766 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.767 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.767 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.767 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.767 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.767 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.027 00:17:55.027 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.027 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.027 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.287 12:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.287 { 00:17:55.287 "cntlid": 127, 00:17:55.287 "qid": 0, 00:17:55.287 "state": "enabled", 00:17:55.287 "thread": "nvmf_tgt_poll_group_000", 00:17:55.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.287 "listen_address": { 00:17:55.287 "trtype": "TCP", 00:17:55.287 "adrfam": "IPv4", 00:17:55.287 "traddr": "10.0.0.2", 00:17:55.287 "trsvcid": "4420" 00:17:55.287 }, 00:17:55.287 "peer_address": { 00:17:55.287 "trtype": "TCP", 00:17:55.287 "adrfam": "IPv4", 00:17:55.287 "traddr": "10.0.0.1", 00:17:55.287 "trsvcid": "54630" 00:17:55.287 }, 00:17:55.287 "auth": { 00:17:55.287 "state": "completed", 00:17:55.287 "digest": "sha512", 00:17:55.287 "dhgroup": "ffdhe4096" 00:17:55.287 } 00:17:55.287 } 00:17:55.287 ]' 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.287 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.547 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.547 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.547 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.547 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:55.547 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:17:56.118 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.118 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.118 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.118 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.379 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.951 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.951 
12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.951 { 00:17:56.951 "cntlid": 129, 00:17:56.951 "qid": 0, 00:17:56.951 "state": "enabled", 00:17:56.951 "thread": "nvmf_tgt_poll_group_000", 00:17:56.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.951 "listen_address": { 00:17:56.951 "trtype": "TCP", 00:17:56.951 "adrfam": "IPv4", 00:17:56.951 "traddr": "10.0.0.2", 00:17:56.951 "trsvcid": "4420" 00:17:56.951 }, 00:17:56.951 "peer_address": { 00:17:56.951 "trtype": "TCP", 00:17:56.951 "adrfam": "IPv4", 00:17:56.951 "traddr": "10.0.0.1", 00:17:56.951 "trsvcid": "54654" 00:17:56.951 }, 00:17:56.951 "auth": { 00:17:56.951 "state": "completed", 00:17:56.951 "digest": "sha512", 00:17:56.951 "dhgroup": "ffdhe6144" 00:17:56.951 } 00:17:56.951 } 00:17:56.951 ]' 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.951 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:57.212 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.155 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.416 00:17:58.416 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.416 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.416 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.677 { 00:17:58.677 "cntlid": 131, 00:17:58.677 "qid": 0, 00:17:58.677 "state": "enabled", 00:17:58.677 "thread": "nvmf_tgt_poll_group_000", 00:17:58.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.677 "listen_address": { 00:17:58.677 "trtype": "TCP", 00:17:58.677 "adrfam": "IPv4", 00:17:58.677 "traddr": "10.0.0.2", 00:17:58.677 "trsvcid": "4420" 00:17:58.677 }, 00:17:58.677 "peer_address": { 00:17:58.677 "trtype": "TCP", 00:17:58.677 "adrfam": "IPv4", 00:17:58.677 "traddr": "10.0.0.1", 00:17:58.677 "trsvcid": "54684" 00:17:58.677 }, 00:17:58.677 "auth": { 00:17:58.677 "state": "completed", 00:17:58.677 "digest": "sha512", 00:17:58.677 "dhgroup": "ffdhe6144" 00:17:58.677 } 00:17:58.677 } 00:17:58.677 ]' 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.677 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.939 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.939 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.939 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.939 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:58.939 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.880 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.881 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.141 00:18:00.141 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.141 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.141 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.402 { 00:18:00.402 "cntlid": 133, 00:18:00.402 "qid": 0, 00:18:00.402 "state": "enabled", 00:18:00.402 "thread": "nvmf_tgt_poll_group_000", 00:18:00.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.402 "listen_address": { 00:18:00.402 "trtype": "TCP", 00:18:00.402 "adrfam": "IPv4", 00:18:00.402 "traddr": "10.0.0.2", 00:18:00.402 "trsvcid": "4420" 00:18:00.402 }, 00:18:00.402 "peer_address": { 00:18:00.402 "trtype": "TCP", 00:18:00.402 "adrfam": "IPv4", 00:18:00.402 "traddr": "10.0.0.1", 00:18:00.402 "trsvcid": "54708" 00:18:00.402 }, 00:18:00.402 "auth": { 00:18:00.402 "state": "completed", 00:18:00.402 "digest": "sha512", 00:18:00.402 "dhgroup": "ffdhe6144" 00:18:00.402 } 00:18:00.402 } 00:18:00.402 ]' 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.402 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.663 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.663 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.663 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.663 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret 
DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:18:00.663 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:18:01.606 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.606 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.606 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.607 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.607 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.607 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.607 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.607 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:01.607 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.868 00:18:01.868 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.868 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.868 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.129 { 00:18:02.129 "cntlid": 135, 00:18:02.129 "qid": 0, 00:18:02.129 "state": "enabled", 00:18:02.129 "thread": "nvmf_tgt_poll_group_000", 00:18:02.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.129 "listen_address": { 00:18:02.129 "trtype": "TCP", 00:18:02.129 "adrfam": "IPv4", 00:18:02.129 "traddr": "10.0.0.2", 00:18:02.129 "trsvcid": "4420" 00:18:02.129 }, 00:18:02.129 "peer_address": { 00:18:02.129 "trtype": "TCP", 00:18:02.129 "adrfam": "IPv4", 00:18:02.129 "traddr": "10.0.0.1", 00:18:02.129 "trsvcid": "54742" 00:18:02.129 }, 00:18:02.129 "auth": { 00:18:02.129 "state": "completed", 00:18:02.129 "digest": "sha512", 00:18:02.129 "dhgroup": "ffdhe6144" 00:18:02.129 } 00:18:02.129 } 00:18:02.129 ]' 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.129 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.389 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.390 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.390 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.390 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:02.390 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.331 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.332 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.332 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.903 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.903 { 00:18:03.903 "cntlid": 137, 00:18:03.903 "qid": 0, 00:18:03.903 "state": "enabled", 00:18:03.903 "thread": "nvmf_tgt_poll_group_000", 00:18:03.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.903 "listen_address": { 00:18:03.903 "trtype": "TCP", 00:18:03.903 "adrfam": "IPv4", 00:18:03.903 "traddr": "10.0.0.2", 00:18:03.903 "trsvcid": "4420" 00:18:03.903 }, 00:18:03.903 "peer_address": { 00:18:03.903 "trtype": "TCP", 00:18:03.903 "adrfam": "IPv4", 00:18:03.903 "traddr": "10.0.0.1", 00:18:03.903 "trsvcid": "54776" 00:18:03.903 }, 00:18:03.903 "auth": { 00:18:03.903 "state": "completed", 00:18:03.903 "digest": "sha512", 00:18:03.903 "dhgroup": "ffdhe8192" 00:18:03.903 } 00:18:03.903 } 00:18:03.903 ]' 00:18:03.903 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.164 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
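Every successful attach in this run is verified the same way: bdev_nvme_get_controllers on the host socket must report the expected controller name, and nvmf_subsystem_get_qpairs on the target must show a completed auth block with the negotiated digest and dhgroup. A minimal sketch of that check, reusing the rpc.py path and sockets from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side (host.sock): the attached bdev controller should be nvme0.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    # Target side (default spdk.sock): qpair 0 should report completed auth.
    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'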
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.424 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:18:04.424 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.995 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.257 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.257 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.517 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.777 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.777 { 00:18:05.777 "cntlid": 139, 00:18:05.777 "qid": 0, 00:18:05.777 "state": "enabled", 00:18:05.778 "thread": "nvmf_tgt_poll_group_000", 00:18:05.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.778 "listen_address": { 00:18:05.778 "trtype": "TCP", 00:18:05.778 "adrfam": "IPv4", 00:18:05.778 "traddr": "10.0.0.2", 00:18:05.778 "trsvcid": "4420" 00:18:05.778 }, 00:18:05.778 "peer_address": { 00:18:05.778 "trtype": "TCP", 00:18:05.778 "adrfam": "IPv4", 00:18:05.778 "traddr": "10.0.0.1", 00:18:05.778 "trsvcid": "50782" 00:18:05.778 }, 00:18:05.778 "auth": { 00:18:05.778 "state": "completed", 00:18:05.778 "digest": "sha512", 00:18:05.778 "dhgroup": "ffdhe8192" 00:18:05.778 } 00:18:05.778 } 00:18:05.778 ]' 00:18:05.778 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.778 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.038 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.038 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.038 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.038 12:02:42 
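The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68 is what makes bidirectional authentication optional: when the ckeys entry for a key id is empty or unset, the array expands to nothing and no --dhchap-ctrlr-key flag is passed at all. A stand-alone sketch of the idiom with illustrative values:

    ckeys=(secret0 secret1 "")               # illustrative; entry 2 empty, entry 3 unset
    for i in 0 1 2 3; do
      ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i adds ${#ckey[@]} arg(s)"   # 2 for keys 0-1, 0 for keys 2-3
    done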
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.038 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.038 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.298 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:18:06.298 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: --dhchap-ctrl-secret DHHC-1:02:YjEwOGQ1OTE1Y2M5ZDdkY2FmYzc3MDkwMjM1ZDYxOGYyYmU4ZTNkZjVkNmJjZjQ1RnP+VA==: 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.869 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.129 12:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.129 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.389 00:18:07.650 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.650 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.650 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.650 { 00:18:07.650 "cntlid": 141, 00:18:07.650 "qid": 0, 00:18:07.650 "state": "enabled", 00:18:07.650 "thread": "nvmf_tgt_poll_group_000", 00:18:07.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.650 "listen_address": { 00:18:07.650 "trtype": "TCP", 00:18:07.650 "adrfam": "IPv4", 00:18:07.650 "traddr": "10.0.0.2", 00:18:07.650 "trsvcid": "4420" 00:18:07.650 }, 00:18:07.650 "peer_address": { 00:18:07.650 "trtype": "TCP", 00:18:07.650 "adrfam": "IPv4", 00:18:07.650 "traddr": "10.0.0.1", 00:18:07.650 "trsvcid": "50812" 00:18:07.650 }, 00:18:07.650 "auth": { 00:18:07.650 "state": "completed", 00:18:07.650 "digest": "sha512", 00:18:07.650 "dhgroup": "ffdhe8192" 00:18:07.650 } 00:18:07.650 } 00:18:07.650 ]' 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.650 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.910 12:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.910 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.910 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.910 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.910 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.911 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:18:07.911 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:01:Njc2MjYwOThiNzk1M2QyNDI3M2QxNGMzNzZlOWRkNDRMilIz: 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.853 12:02:45 
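Each block above is one iteration of the nested loops in target/auth.sh: for every dhgroup, and for every key id, the host initiator is re-armed with bdev_nvme_set_options and connect_authenticate runs the full add-host, attach, verify, detach, nvme connect/disconnect, remove-host cycle. In outline (function and array names are taken from the log, bodies elided):

    for dhgroup in "${dhgroups[@]}"; do      # ... ffdhe6144 ffdhe8192
      for keyid in "${!keys[@]}"; do         # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
      done
    done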
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.853 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.422 00:18:09.422 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.422 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.422 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.683 { 00:18:09.683 "cntlid": 143, 00:18:09.683 "qid": 0, 00:18:09.683 "state": "enabled", 00:18:09.683 "thread": "nvmf_tgt_poll_group_000", 00:18:09.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.683 "listen_address": { 00:18:09.683 "trtype": "TCP", 00:18:09.683 "adrfam": "IPv4", 00:18:09.683 "traddr": "10.0.0.2", 00:18:09.683 "trsvcid": "4420" 00:18:09.683 }, 00:18:09.683 "peer_address": { 00:18:09.683 "trtype": "TCP", 00:18:09.683 "adrfam": "IPv4", 00:18:09.683 "traddr": "10.0.0.1", 00:18:09.683 "trsvcid": "50834" 00:18:09.683 }, 00:18:09.683 "auth": { 00:18:09.683 "state": "completed", 00:18:09.683 "digest": "sha512", 00:18:09.683 "dhgroup": "ffdhe8192" 00:18:09.683 } 00:18:09.683 } 00:18:09.683 ]' 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.683 
12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.683 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.944 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:09.944 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.517 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.777 12:02:47 
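At target/auth.sh@129-130 the host is re-armed with every digest and dhgroup at once; the IFS=, plus printf %s pair is the usual bash join idiom for turning an array into the comma-separated list those RPC flags expect. A stand-alone sketch:

    digests=(sha256 sha384 sha512)
    # "${digests[*]}" joins elements on the first character of IFS, here a comma.
    joined=$(IFS=,; printf %s "${digests[*]}")
    echo "$joined"                           # sha256,sha384,sha512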
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.777 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.349 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.349 { 00:18:11.349 "cntlid": 145, 00:18:11.349 "qid": 0, 00:18:11.349 "state": "enabled", 00:18:11.349 "thread": "nvmf_tgt_poll_group_000", 00:18:11.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.349 "listen_address": { 00:18:11.349 "trtype": "TCP", 00:18:11.349 "adrfam": "IPv4", 00:18:11.349 "traddr": "10.0.0.2", 00:18:11.349 "trsvcid": "4420" 00:18:11.349 }, 00:18:11.349 "peer_address": { 00:18:11.349 
"trtype": "TCP", 00:18:11.349 "adrfam": "IPv4", 00:18:11.349 "traddr": "10.0.0.1", 00:18:11.349 "trsvcid": "50866" 00:18:11.349 }, 00:18:11.349 "auth": { 00:18:11.349 "state": "completed", 00:18:11.349 "digest": "sha512", 00:18:11.349 "dhgroup": "ffdhe8192" 00:18:11.349 } 00:18:11.349 } 00:18:11.349 ]' 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.349 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.611 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.611 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.611 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.611 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.611 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.611 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:18:11.611 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzEyNDg4MmM2ZDMwNGEzNTlmZjQyZjhjMjk3YmVmNzk2ODcwOTc5MjYzYzY5ZWE3hEGojg==: --dhchap-ctrl-secret DHHC-1:03:ZTMwZDlhYmRkMzgwODU1MGRjZDY5YjQ4OWIwMzEzNTcyYjBlYWRmYmRmYWI4MjNkMDJmNDMyYjZlOWViYTU0YZOu+cQ=: 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.553 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.815 request: 00:18:12.815 { 00:18:12.815 "name": "nvme0", 00:18:12.815 "trtype": "tcp", 00:18:12.815 "traddr": "10.0.0.2", 00:18:12.815 "adrfam": "ipv4", 00:18:12.815 "trsvcid": "4420", 00:18:12.815 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.815 "prchk_reftag": false, 00:18:12.815 "prchk_guard": false, 00:18:12.815 "hdgst": false, 00:18:12.815 "ddgst": false, 00:18:12.815 "dhchap_key": "key2", 00:18:12.815 "allow_unrecognized_csi": false, 00:18:12.815 "method": "bdev_nvme_attach_controller", 00:18:12.815 "req_id": 1 00:18:12.815 } 00:18:12.815 Got JSON-RPC error response 00:18:12.815 response: 00:18:12.815 { 00:18:12.815 "code": -5, 00:18:12.815 "message": "Input/output error" 00:18:12.815 } 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.815 12:02:49 
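This is the first expected failure: the host was re-registered with key1 only (target/auth.sh@144), so an attach presenting key2 is rejected and surfaces as JSON-RPC error -5, Input/output error. The NOT wrapper from autotest_common.sh inverts the exit status so the authentication failure counts as a pass; a minimal sketch of that behavior (the real helper also validates its argument, as the type -t checks above show):

    NOT() {
      # Succeed only when the wrapped command fails.
      if "$@"; then
        return 1
      fi
      return 0
    }
    NOT false && echo "failure was expected and observed"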
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.815 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.387 request: 00:18:13.387 { 00:18:13.387 "name": "nvme0", 00:18:13.387 "trtype": "tcp", 00:18:13.387 "traddr": "10.0.0.2", 00:18:13.387 "adrfam": "ipv4", 00:18:13.387 "trsvcid": "4420", 00:18:13.387 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.387 "prchk_reftag": false, 00:18:13.387 "prchk_guard": false, 00:18:13.387 "hdgst": false, 00:18:13.387 "ddgst": false, 00:18:13.387 "dhchap_key": "key1", 00:18:13.387 "dhchap_ctrlr_key": "ckey2", 00:18:13.387 "allow_unrecognized_csi": false, 00:18:13.387 "method": "bdev_nvme_attach_controller", 00:18:13.387 "req_id": 1 00:18:13.387 } 00:18:13.387 Got JSON-RPC error response 00:18:13.387 response: 00:18:13.387 { 00:18:13.387 "code": -5, 00:18:13.387 "message": "Input/output error" 00:18:13.387 } 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:13.387 12:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.387 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.961 request: 00:18:13.961 { 00:18:13.961 "name": "nvme0", 00:18:13.961 "trtype": "tcp", 00:18:13.961 "traddr": "10.0.0.2", 00:18:13.961 "adrfam": "ipv4", 00:18:13.961 "trsvcid": "4420", 00:18:13.961 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.961 "prchk_reftag": false, 00:18:13.961 "prchk_guard": false, 00:18:13.961 "hdgst": false, 00:18:13.961 "ddgst": false, 00:18:13.961 "dhchap_key": "key1", 00:18:13.961 "dhchap_ctrlr_key": "ckey1", 00:18:13.961 "allow_unrecognized_csi": false, 00:18:13.961 "method": "bdev_nvme_attach_controller", 00:18:13.961 "req_id": 1 00:18:13.961 } 00:18:13.961 Got JSON-RPC error response 00:18:13.961 response: 00:18:13.961 { 00:18:13.961 "code": -5, 00:18:13.961 "message": "Input/output error" 00:18:13.961 } 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.961 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 948578 ']' 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 948578' 00:18:13.962 killing process with pid 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 948578 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=974268 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 974268 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 974268 ']' 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.962 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 974268 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 974268 ']' 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
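The second target instance is launched with --wait-for-rpc, so subsystem initialization pauses until the keyring is populated, and -L nvmf_auth enables auth-component debug logging. A sketch of that startup sequence, using the paths from this run; completing init via framework_start_init is an assumption about the waitforlisten flow:

    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll until /var/tmp/spdk.sock answers, load the keys (next block), then:
    # rpc.py framework_start_init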
00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.905 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 null0 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q90 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.crV ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L8B 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.biw ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.biw 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.167 12:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E4x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.si6 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.si6 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IES 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
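The null0 line is output from the restart's initial rpc batch (a null bdev backing the subsystem), and the keyring_file_add_key calls register each on-disk DH-HMAC-CHAP secret under the key name the subsystem configuration references. The same registrations as plain commands, with the file names copied from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.Q90
    "$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.crV
    "$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.L8B
    "$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.biw
    "$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha384.E4x
    "$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.si6
    "$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha512.IES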
00:18:15.167 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.111 nvme0n1 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.111 { 00:18:16.111 "cntlid": 1, 00:18:16.111 "qid": 0, 00:18:16.111 "state": "enabled", 00:18:16.111 "thread": "nvmf_tgt_poll_group_000", 00:18:16.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.111 "listen_address": { 00:18:16.111 "trtype": "TCP", 00:18:16.111 "adrfam": "IPv4", 00:18:16.111 "traddr": "10.0.0.2", 00:18:16.111 "trsvcid": "4420" 00:18:16.111 }, 00:18:16.111 "peer_address": { 00:18:16.111 "trtype": "TCP", 00:18:16.111 "adrfam": "IPv4", 00:18:16.111 "traddr": "10.0.0.1", 00:18:16.111 "trsvcid": "42294" 00:18:16.111 }, 00:18:16.111 "auth": { 00:18:16.111 "state": "completed", 00:18:16.111 "digest": "sha512", 00:18:16.111 "dhgroup": "ffdhe8192" 00:18:16.111 } 00:18:16.111 } 00:18:16.111 ]' 00:18:16.111 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.373 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.633 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:16.634 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:17.204 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.204 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.204 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:17.205 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.465 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.465 request: 00:18:17.465 { 00:18:17.465 "name": "nvme0", 00:18:17.465 "trtype": "tcp", 00:18:17.465 "traddr": "10.0.0.2", 00:18:17.465 "adrfam": "ipv4", 00:18:17.465 "trsvcid": "4420", 00:18:17.465 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.465 "prchk_reftag": false, 00:18:17.465 "prchk_guard": false, 00:18:17.465 "hdgst": false, 00:18:17.465 "ddgst": false, 00:18:17.465 "dhchap_key": "key3", 00:18:17.465 "allow_unrecognized_csi": false, 00:18:17.465 "method": "bdev_nvme_attach_controller", 00:18:17.465 "req_id": 1 00:18:17.465 } 00:18:17.465 Got JSON-RPC error response 00:18:17.465 response: 00:18:17.465 { 00:18:17.465 "code": -5, 00:18:17.465 "message": "Input/output error" 00:18:17.465 } 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:17.465 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.726 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.986 request: 00:18:17.986 { 00:18:17.986 "name": "nvme0", 00:18:17.986 "trtype": "tcp", 00:18:17.986 "traddr": "10.0.0.2", 00:18:17.986 "adrfam": "ipv4", 00:18:17.986 "trsvcid": "4420", 00:18:17.986 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.986 "prchk_reftag": false, 00:18:17.986 "prchk_guard": false, 00:18:17.986 "hdgst": false, 00:18:17.986 "ddgst": false, 00:18:17.986 "dhchap_key": "key3", 00:18:17.986 "allow_unrecognized_csi": false, 00:18:17.986 "method": "bdev_nvme_attach_controller", 00:18:17.986 "req_id": 1 00:18:17.986 } 00:18:17.986 Got JSON-RPC error response 00:18:17.986 response: 00:18:17.986 { 00:18:17.986 "code": -5, 00:18:17.986 "message": "Input/output error" 00:18:17.986 } 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.986 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.246 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.515 request: 00:18:18.515 { 00:18:18.515 "name": "nvme0", 00:18:18.515 "trtype": "tcp", 00:18:18.515 "traddr": "10.0.0.2", 00:18:18.515 "adrfam": "ipv4", 00:18:18.515 "trsvcid": "4420", 00:18:18.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.515 "prchk_reftag": false, 00:18:18.515 "prchk_guard": false, 00:18:18.515 "hdgst": false, 00:18:18.515 "ddgst": false, 00:18:18.515 "dhchap_key": "key0", 00:18:18.515 "dhchap_ctrlr_key": "key1", 00:18:18.515 "allow_unrecognized_csi": false, 00:18:18.515 "method": "bdev_nvme_attach_controller", 00:18:18.515 "req_id": 1 00:18:18.515 } 00:18:18.515 Got JSON-RPC error response 00:18:18.515 response: 00:18:18.515 { 00:18:18.515 "code": -5, 00:18:18.515 "message": "Input/output error" 00:18:18.515 } 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.515 12:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:18.515 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:18.810 nvme0n1 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.810 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:19.095 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:19.731 nvme0n1 00:18:19.731 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:19.731 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:19.731 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:19.992 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.253 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.253 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:20.253 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: --dhchap-ctrl-secret DHHC-1:03:NjAxMzQ4MjFlNmFjMTJlMDZlYzUxMjE2ZWU1NzU5MTkzYjkxNjRjNWM5N2IyNjhhYmEyMDE4MDljNmEyMGM2ZTVMP7w=: 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:20.825 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.826 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.087 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.348 request: 00:18:21.348 { 00:18:21.348 "name": "nvme0", 00:18:21.348 "trtype": "tcp", 00:18:21.348 "traddr": "10.0.0.2", 00:18:21.348 "adrfam": "ipv4", 00:18:21.348 "trsvcid": "4420", 00:18:21.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.348 "prchk_reftag": false, 00:18:21.348 "prchk_guard": false, 00:18:21.348 "hdgst": false, 00:18:21.348 "ddgst": false, 00:18:21.348 "dhchap_key": "key1", 00:18:21.348 "allow_unrecognized_csi": false, 00:18:21.348 "method": "bdev_nvme_attach_controller", 00:18:21.348 "req_id": 1 00:18:21.348 } 00:18:21.348 Got JSON-RPC error response 00:18:21.348 response: 00:18:21.348 { 00:18:21.348 "code": -5, 00:18:21.348 "message": "Input/output error" 00:18:21.348 } 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.348 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.291 nvme0n1 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.291 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:22.553 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:22.815 nvme0n1 00:18:22.815 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:22.815 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:22.815 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: '' 2s 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: ]] 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2VmMTZiN2I2OWUzMDRiMjY1YmRmNTFkZWMzMjlkZjMZ7OuR: 00:18:23.077 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:23.078 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:23.078 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: 2s 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: 00:18:25.623 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: ]] 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzVmNzRhNjQ1MmIxZTc4M2Q3YWU2NWIzNGQ1NTAxNzM3MTQyYTFmYzVkNzFhNmNhpdauuQ==: 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:25.624 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.541 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:28.113 nvme0n1 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.113 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.684 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:28.684 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:28.684 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:28.684 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.944 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:29.514 request: 00:18:29.514 { 00:18:29.514 "name": "nvme0", 00:18:29.514 "dhchap_key": "key1", 00:18:29.514 "dhchap_ctrlr_key": "key3", 00:18:29.514 "method": "bdev_nvme_set_keys", 00:18:29.514 "req_id": 1 00:18:29.514 } 00:18:29.514 Got JSON-RPC error response 00:18:29.514 response: 00:18:29.514 { 00:18:29.514 "code": -13, 00:18:29.514 "message": "Permission denied" 00:18:29.514 } 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:29.514 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.774 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:29.774 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:30.713 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:30.713 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:30.713 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.974 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.545 nvme0n1 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
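What the @270/@271 steps above are exercising: the target has just been re-keyed to the key2/key3 pair, so a host-side bdev_nvme_set_keys that offers key2 together with controller key key0 must be rejected, and the JSON-RPC "Permission denied" (code -13) response printed next is the assertion passing, not a test failure. The matching positive rotation, sketched with the same RPCs and key names this run uses (rpc.py paths shortened):

# re-key the target for this host first...
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# ...then move the live host controller to the same pair; a mismatched pair fails with -13
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3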
00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.545 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:32.115 request: 00:18:32.115 { 00:18:32.115 "name": "nvme0", 00:18:32.115 "dhchap_key": "key2", 00:18:32.115 "dhchap_ctrlr_key": "key0", 00:18:32.115 "method": "bdev_nvme_set_keys", 00:18:32.115 "req_id": 1 00:18:32.115 } 00:18:32.115 Got JSON-RPC error response 00:18:32.115 response: 00:18:32.115 { 00:18:32.115 "code": -13, 00:18:32.115 "message": "Permission denied" 00:18:32.115 } 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:32.115 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.379 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:32.379 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:33.328 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:33.328 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:33.328 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 948694 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 948694 ']' 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 948694 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.589 12:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.589 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 948694 00:18:33.589 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:33.589 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:33.589 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 948694' 00:18:33.589 killing process with pid 948694 00:18:33.589 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 948694 00:18:33.589 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 948694 00:18:33.850 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.851 rmmod nvme_tcp 00:18:33.851 rmmod nvme_fabrics 00:18:33.851 rmmod nvme_keyring 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 974268 ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 974268 ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974268' 00:18:33.851 killing process with pid 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 974268 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.851 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Q90 /tmp/spdk.key-sha256.L8B /tmp/spdk.key-sha384.E4x /tmp/spdk.key-sha512.IES /tmp/spdk.key-sha512.crV /tmp/spdk.key-sha384.biw /tmp/spdk.key-sha256.si6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:36.400 00:18:36.400 real 2m36.313s 00:18:36.400 user 5m52.043s 00:18:36.400 sys 0m24.711s 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.400 ************************************ 00:18:36.400 END TEST nvmf_auth_target 00:18:36.400 ************************************ 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.400 ************************************ 00:18:36.400 START TEST nvmf_bdevio_no_huge 00:18:36.400 ************************************ 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:36.400 * Looking for test storage... 
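Before the next test's storage probe (continued below) resolves, note the teardown pattern the auth case just ran through: stop the host app (pid 948694) and the nvmf target (pid 974268) with the suite's killprocess helper, unload the host NVMe-oF modules, restore iptables and flush the test interface, and delete the run's key material, after which the elapsed/user/sys summary and the END TEST banner close the case. Condensed, and using the helper and file names visible in this log, the cleanup amounts to:

# stop both SPDK apps (killprocess is the suite helper: kill, then wait on the pid)
killprocess 948694    # host application
killprocess 974268    # nvmf target
# unload host-side NVMe-oF modules and drop the generated secrets
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
rm -f /tmp/spdk.key-null.Q90 /tmp/spdk.key-sha256.L8B /tmp/spdk.key-sha384.E4x \
      /tmp/spdk.key-sha512.IES /tmp/spdk.key-sha512.crV /tmp/spdk.key-sha384.biw \
      /tmp/spdk.key-sha256.si6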
00:18:36.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:36.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.400 --rc genhtml_branch_coverage=1 00:18:36.400 --rc genhtml_function_coverage=1 00:18:36.400 --rc genhtml_legend=1 00:18:36.400 --rc geninfo_all_blocks=1 00:18:36.400 --rc geninfo_unexecuted_blocks=1 00:18:36.400 00:18:36.400 ' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:36.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.400 --rc genhtml_branch_coverage=1 00:18:36.400 --rc genhtml_function_coverage=1 00:18:36.400 --rc genhtml_legend=1 00:18:36.400 --rc geninfo_all_blocks=1 00:18:36.400 --rc geninfo_unexecuted_blocks=1 00:18:36.400 00:18:36.400 ' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:36.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.400 --rc genhtml_branch_coverage=1 00:18:36.400 --rc genhtml_function_coverage=1 00:18:36.400 --rc genhtml_legend=1 00:18:36.400 --rc geninfo_all_blocks=1 00:18:36.400 --rc geninfo_unexecuted_blocks=1 00:18:36.400 00:18:36.400 ' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:36.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.400 --rc genhtml_branch_coverage=1 00:18:36.400 --rc genhtml_function_coverage=1 00:18:36.400 --rc genhtml_legend=1 00:18:36.400 --rc geninfo_all_blocks=1 00:18:36.400 --rc geninfo_unexecuted_blocks=1 00:18:36.400 00:18:36.400 ' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.400 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:36.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.401 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.546 
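The "[: : integer expression expected" complaint above is bash objecting to a numeric test on an empty value: nvmf/common.sh line 33 expands an unset flag into '[' '' -eq 1 ']'. The failed test still acts as false, so the run continues, but a defensive form would default the variable first (the flag name below is illustrative, not the one actually used in common.sh):

    [ "$SPDK_TEST_FLAG" -eq 1 ]        # unset flag -> '[' '' -eq 1 ']' -> the error above
    [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]   # default empty/unset to 0; no error, same branch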
12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:44.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:44.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.546 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:44.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:44.547 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:44.547 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:44.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:18:44.547 00:18:44.547 --- 10.0.0.2 ping statistics --- 00:18:44.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.547 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:18:44.547 00:18:44.547 --- 10.0.0.1 ping statistics --- 00:18:44.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.547 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=982917 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 982917 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 982917 ']' 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.547 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.547 [2024-10-21 12:03:20.409551] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:18:44.547 [2024-10-21 12:03:20.409625] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:44.547 [2024-10-21 12:03:20.506789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.547 [2024-10-21 12:03:20.567086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.547 [2024-10-21 12:03:20.567136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.547 [2024-10-21 12:03:20.567145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.547 [2024-10-21 12:03:20.567152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.547 [2024-10-21 12:03:20.567159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
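With the pings above confirming the split topology (target 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, initiator 10.0.0.1 in the root namespace), nvmfappstart launches nvmf_tgt inside that namespace and blocks in waitforlisten until the RPC socket is usable. A condensed sketch of the sequence using this run's paths and flags (the real waitforlisten does more than test that the socket file exists):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [ -S /var/tmp/spdk.sock ] && break         # RPC socket is up, tests can proceed
        sleep 0.1
    done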
00:18:44.547 [2024-10-21 12:03:20.568683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:44.547 [2024-10-21 12:03:20.568833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:44.547 [2024-10-21 12:03:20.568991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.547 [2024-10-21 12:03:20.568991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 [2024-10-21 12:03:21.289479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 Malloc0 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 [2024-10-21 12:03:21.343565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:44.809 { 00:18:44.809 "params": { 00:18:44.809 "name": "Nvme$subsystem", 00:18:44.809 "trtype": "$TEST_TRANSPORT", 00:18:44.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.809 "adrfam": "ipv4", 00:18:44.809 "trsvcid": "$NVMF_PORT", 00:18:44.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.809 "hdgst": ${hdgst:-false}, 00:18:44.809 "ddgst": ${ddgst:-false} 00:18:44.809 }, 00:18:44.809 "method": "bdev_nvme_attach_controller" 00:18:44.809 } 00:18:44.809 EOF 00:18:44.809 )") 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:44.809 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:44.809 "params": { 00:18:44.809 "name": "Nvme1", 00:18:44.809 "trtype": "tcp", 00:18:44.809 "traddr": "10.0.0.2", 00:18:44.809 "adrfam": "ipv4", 00:18:44.809 "trsvcid": "4420", 00:18:44.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.809 "hdgst": false, 00:18:44.809 "ddgst": false 00:18:44.809 }, 00:18:44.809 "method": "bdev_nvme_attach_controller" 00:18:44.809 }' 00:18:44.809 [2024-10-21 12:03:21.403552] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
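The heredoc traced above expands once per subsystem, and the printf/jq pair turns it into the config that bdevio receives as --json /dev/fd/62: the test process attaches Nvme1 over the namespace link before running its suites. A condensed sketch of that plumbing with this run's values (the real gen_nvmf_target_json wraps the object in a fuller config before handing it over):

    ./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(printf '%s\n' '{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }')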
00:18:44.809 [2024-10-21 12:03:21.403621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid983182 ] 00:18:45.075 [2024-10-21 12:03:21.490888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.075 [2024-10-21 12:03:21.551296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.075 [2024-10-21 12:03:21.551462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.075 [2024-10-21 12:03:21.551595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.337 I/O targets: 00:18:45.337 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:45.337 00:18:45.337 00:18:45.337 CUnit - A unit testing framework for C - Version 2.1-3 00:18:45.337 http://cunit.sourceforge.net/ 00:18:45.337 00:18:45.337 00:18:45.337 Suite: bdevio tests on: Nvme1n1 00:18:45.597 Test: blockdev write read block ...passed 00:18:45.597 Test: blockdev write zeroes read block ...passed 00:18:45.597 Test: blockdev write zeroes read no split ...passed 00:18:45.597 Test: blockdev write zeroes read split ...passed 00:18:45.597 Test: blockdev write zeroes read split partial ...passed 00:18:45.597 Test: blockdev reset ...[2024-10-21 12:03:22.119072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.597 [2024-10-21 12:03:22.119176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7dd60 (9): Bad file descriptor 00:18:45.597 [2024-10-21 12:03:22.173744] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:45.597 passed 00:18:45.858 Test: blockdev write read 8 blocks ...passed 00:18:45.858 Test: blockdev write read size > 128k ...passed 00:18:45.858 Test: blockdev write read invalid size ...passed 00:18:45.858 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:45.858 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:45.858 Test: blockdev write read max offset ...passed 00:18:45.858 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:45.858 Test: blockdev writev readv 8 blocks ...passed 00:18:45.858 Test: blockdev writev readv 30 x 1block ...passed 00:18:45.858 Test: blockdev writev readv block ...passed 00:18:45.858 Test: blockdev writev readv size > 128k ...passed 00:18:45.858 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:45.858 Test: blockdev comparev and writev ...[2024-10-21 12:03:22.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.440663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.440680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.440689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.441142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.441154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.441169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.441185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.441605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.441617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.441632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.441640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.442073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.442085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:45.858 [2024-10-21 12:03:22.442099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.858 [2024-10-21 12:03:22.442107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.119 passed 00:18:46.119 Test: blockdev nvme passthru rw ...passed 00:18:46.119 Test: blockdev nvme passthru vendor specific ...[2024-10-21 12:03:22.527169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.119 [2024-10-21 12:03:22.527190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.119 [2024-10-21 12:03:22.527482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.119 [2024-10-21 12:03:22.527494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.119 [2024-10-21 12:03:22.527762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.119 [2024-10-21 12:03:22.527782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.119 [2024-10-21 12:03:22.528171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.119 [2024-10-21 12:03:22.528181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.119 passed 00:18:46.119 Test: blockdev nvme admin passthru ...passed 00:18:46.119 Test: blockdev copy ...passed 00:18:46.119 00:18:46.119 Run Summary: Type Total Ran Passed Failed Inactive 00:18:46.119 suites 1 1 n/a 0 0 00:18:46.119 tests 23 23 23 0 0 00:18:46.119 asserts 152 152 152 0 n/a 00:18:46.119 00:18:46.119 Elapsed time = 1.383 seconds 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.379 rmmod nvme_tcp 00:18:46.379 rmmod nvme_fabrics 00:18:46.379 rmmod nvme_keyring 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 982917 ']' 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 982917 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 982917 ']' 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 982917 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.379 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982917 00:18:46.640 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:46.640 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:46.640 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982917' 00:18:46.640 killing process with pid 982917 00:18:46.640 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 982917 00:18:46.640 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 982917 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.640 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.186 00:18:49.186 real 0m12.698s 00:18:49.186 user 0m15.415s 00:18:49.186 sys 0m6.639s 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.186 ************************************ 00:18:49.186 END TEST nvmf_bdevio_no_huge 00:18:49.186 ************************************ 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.186 ************************************ 00:18:49.186 START TEST nvmf_tls 00:18:49.186 ************************************ 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.186 * Looking for test storage... 00:18:49.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.186 --rc genhtml_branch_coverage=1 00:18:49.186 --rc genhtml_function_coverage=1 00:18:49.186 --rc genhtml_legend=1 00:18:49.186 --rc geninfo_all_blocks=1 00:18:49.186 --rc geninfo_unexecuted_blocks=1 00:18:49.186 00:18:49.186 ' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.186 --rc genhtml_branch_coverage=1 00:18:49.186 --rc genhtml_function_coverage=1 00:18:49.186 --rc genhtml_legend=1 00:18:49.186 --rc geninfo_all_blocks=1 00:18:49.186 --rc geninfo_unexecuted_blocks=1 00:18:49.186 00:18:49.186 ' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.186 --rc genhtml_branch_coverage=1 00:18:49.186 --rc genhtml_function_coverage=1 00:18:49.186 --rc genhtml_legend=1 00:18:49.186 --rc geninfo_all_blocks=1 00:18:49.186 --rc geninfo_unexecuted_blocks=1 00:18:49.186 00:18:49.186 ' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.186 --rc genhtml_branch_coverage=1 00:18:49.186 --rc genhtml_function_coverage=1 00:18:49.186 --rc genhtml_legend=1 00:18:49.186 --rc geninfo_all_blocks=1 00:18:49.186 --rc geninfo_unexecuted_blocks=1 00:18:49.186 00:18:49.186 ' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
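Every "START TEST"/"END TEST" banner pair, and the real/user/sys block between them, comes from the harness's run_test wrapper, which times the wrapped script and propagates its exit code; nvmf_tls above is simply the next script fed through it. A simplified reconstruction (the actual helper in autotest_common.sh also validates its arguments and manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                # e.g. test/nvmf/target/tls.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }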
00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.186 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.187 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.330 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:57.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:57.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:57.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:57.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.331 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:18:57.331 00:18:57.331 --- 10.0.0.2 ping statistics --- 00:18:57.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.331 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:18:57.331 00:18:57.331 --- 10.0.0.1 ping statistics --- 00:18:57.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.331 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=987713 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 987713 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 987713 ']' 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.331 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.331 [2024-10-21 12:03:33.259229] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:18:57.331 [2024-10-21 12:03:33.259301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.331 [2024-10-21 12:03:33.349560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.331 [2024-10-21 12:03:33.400064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.331 [2024-10-21 12:03:33.400111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.332 [2024-10-21 12:03:33.400120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.332 [2024-10-21 12:03:33.400127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.332 [2024-10-21 12:03:33.400134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.332 [2024-10-21 12:03:33.400905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:57.593 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:57.855 true 00:18:57.855 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.855 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:58.116 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:58.116 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:58.116 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:58.378 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.378 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:58.378 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:58.378 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:58.378 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:58.639 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.639 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:58.899 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:58.900 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:59.161 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.161 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:59.422 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:59.422 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:59.422 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:59.422 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.422 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:59.683 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.IhNJs6vtw1 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.5H731Fa02E 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.IhNJs6vtw1 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.5H731Fa02E 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.944 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:00.205 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.IhNJs6vtw1 00:19:00.205 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IhNJs6vtw1 00:19:00.205 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:00.466 [2024-10-21 12:03:36.842611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.466 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:00.466 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.726 [2024-10-21 12:03:37.179442] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.726 [2024-10-21 12:03:37.179650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.726 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.986 malloc0 00:19:00.986 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.986 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IhNJs6vtw1 00:19:01.246 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.506 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IhNJs6vtw1 00:19:11.499 Initializing NVMe Controllers 00:19:11.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:11.499 Initialization complete. Launching workers. 00:19:11.499 ======================================================== 00:19:11.499 Latency(us) 00:19:11.499 Device Information : IOPS MiB/s Average min max 00:19:11.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18541.59 72.43 3451.91 1155.54 4124.74 00:19:11.499 ======================================================== 00:19:11.499 Total : 18541.59 72.43 3451.91 1155.54 4124.74 00:19:11.499 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhNJs6vtw1 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhNJs6vtw1 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990588 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990588 /var/tmp/bdevperf.sock 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 990588 ']' 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:11.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.499 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.499 [2024-10-21 12:03:48.047753] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:11.499 [2024-10-21 12:03:48.047811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990588 ] 00:19:11.760 [2024-10-21 12:03:48.124171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.760 [2024-10-21 12:03:48.159804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.332 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.332 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.332 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhNJs6vtw1 00:19:12.593 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.593 [2024-10-21 12:03:49.138732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.853 TLSTESTn1 00:19:12.853 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:12.853 Running I/O for 10 seconds... 
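The spdk_nvme_perf run launched above dials the TLS listener using the PSK file /tmp/tmp.IhNJs6vtw1 written earlier by format_interchange_psk. A minimal sketch of that interchange framing is below, inferred from the traced output (the base64 payload in NVMeTLSkey-1:01:MDAxMTIy...JEiQ: decodes to the 32 ASCII key bytes plus a 4-byte trailer); the CRC32 trailer and its little-endian byte order are assumptions, so treat this as illustrative rather than as the nvmf/common.sh helper itself:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # raw ASCII key material, as traced above
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer (assumed CRC32, LE)
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
PY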
00:19:14.739 5637.00 IOPS, 22.02 MiB/s [2024-10-21T10:03:52.720Z] 5167.00 IOPS, 20.18 MiB/s [2024-10-21T10:03:53.664Z] 5370.33 IOPS, 20.98 MiB/s [2024-10-21T10:03:54.608Z] 5583.75 IOPS, 21.81 MiB/s [2024-10-21T10:03:55.621Z] 5718.20 IOPS, 22.34 MiB/s [2024-10-21T10:03:56.608Z] 5710.83 IOPS, 22.31 MiB/s [2024-10-21T10:03:57.553Z] 5763.71 IOPS, 22.51 MiB/s [2024-10-21T10:03:58.497Z] 5770.88 IOPS, 22.54 MiB/s [2024-10-21T10:03:59.439Z] 5715.78 IOPS, 22.33 MiB/s [2024-10-21T10:03:59.439Z] 5699.00 IOPS, 22.26 MiB/s 00:19:22.844 Latency(us) 00:19:22.844 [2024-10-21T10:03:59.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.844 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:22.844 Verification LBA range: start 0x0 length 0x2000 00:19:22.844 TLSTESTn1 : 10.02 5703.01 22.28 0.00 0.00 22409.01 5434.03 27743.57 00:19:22.844 [2024-10-21T10:03:59.439Z] =================================================================================================================== 00:19:22.844 [2024-10-21T10:03:59.439Z] Total : 5703.01 22.28 0.00 0.00 22409.01 5434.03 27743.57 00:19:22.844 { 00:19:22.844 "results": [ 00:19:22.844 { 00:19:22.844 "job": "TLSTESTn1", 00:19:22.844 "core_mask": "0x4", 00:19:22.844 "workload": "verify", 00:19:22.844 "status": "finished", 00:19:22.844 "verify_range": { 00:19:22.844 "start": 0, 00:19:22.844 "length": 8192 00:19:22.844 }, 00:19:22.844 "queue_depth": 128, 00:19:22.844 "io_size": 4096, 00:19:22.844 "runtime": 10.015234, 00:19:22.844 "iops": 5703.012031471257, 00:19:22.844 "mibps": 22.277390747934597, 00:19:22.844 "io_failed": 0, 00:19:22.844 "io_timeout": 0, 00:19:22.844 "avg_latency_us": 22409.008009991187, 00:19:22.844 "min_latency_us": 5434.026666666667, 00:19:22.844 "max_latency_us": 27743.573333333334 00:19:22.844 } 00:19:22.844 ], 00:19:22.844 "core_count": 1 00:19:22.844 } 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 990588 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 990588 ']' 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 990588 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.844 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 990588 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 990588' 00:19:23.105 killing process with pid 990588 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 990588 00:19:23.105 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.105 00:19:23.105 Latency(us) 00:19:23.105 [2024-10-21T10:03:59.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.105 [2024-10-21T10:03:59.700Z] 
=================================================================================================================== 00:19:23.105 [2024-10-21T10:03:59.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 990588 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5H731Fa02E 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5H731Fa02E 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5H731Fa02E 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5H731Fa02E 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=992908 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 992908 /var/tmp/bdevperf.sock 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 992908 ']' 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.105 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.106 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:23.106 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.106 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.106 [2024-10-21 12:03:59.618512] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:23.106 [2024-10-21 12:03:59.618572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992908 ] 00:19:23.106 [2024-10-21 12:03:59.692817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.366 [2024-10-21 12:03:59.721711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.938 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.938 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.938 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5H731Fa02E 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.199 [2024-10-21 12:04:00.752370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.199 [2024-10-21 12:04:00.759267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.199 [2024-10-21 12:04:00.759470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c91c0 (107): Transport endpoint is not connected 00:19:24.199 [2024-10-21 12:04:00.760465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c91c0 (9): Bad file descriptor 00:19:24.199 [2024-10-21 12:04:00.761466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.199 [2024-10-21 12:04:00.761476] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.199 [2024-10-21 12:04:00.761482] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:24.199 [2024-10-21 12:04:00.761490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
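The bdevperf attach traced above is expected to fail: key0 here is /tmp/tmp.5H731Fa02E, a key the target never registered, so the handshake is torn down and the initiator sees errno 107 before the JSON-RPC dump that follows. The assertion is driven by the NOT wrapper from autotest_common.sh; a minimal illustrative reimplementation of that pattern (not the SPDK helper verbatim, whose body also validates its argument and inspects the exit status range) is:

NOT() {
  # Run the wrapped command and invert its status: an expected
  # failure makes the test step pass, an unexpected success fails it.
  if "$@"; then
    return 1
  fi
  return 0
}
# Usage, mirroring the trace: NOT run_bdevperf <subnqn> <hostnqn> <bad-psk-path>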
00:19:24.199 request: 00:19:24.199 { 00:19:24.199 "name": "TLSTEST", 00:19:24.199 "trtype": "tcp", 00:19:24.199 "traddr": "10.0.0.2", 00:19:24.199 "adrfam": "ipv4", 00:19:24.199 "trsvcid": "4420", 00:19:24.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.199 "prchk_reftag": false, 00:19:24.199 "prchk_guard": false, 00:19:24.199 "hdgst": false, 00:19:24.199 "ddgst": false, 00:19:24.199 "psk": "key0", 00:19:24.199 "allow_unrecognized_csi": false, 00:19:24.199 "method": "bdev_nvme_attach_controller", 00:19:24.199 "req_id": 1 00:19:24.199 } 00:19:24.199 Got JSON-RPC error response 00:19:24.199 response: 00:19:24.199 { 00:19:24.199 "code": -5, 00:19:24.199 "message": "Input/output error" 00:19:24.199 } 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 992908 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 992908 ']' 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 992908 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.199 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 992908 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 992908' 00:19:24.462 killing process with pid 992908 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 992908 00:19:24.462 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.462 00:19:24.462 Latency(us) 00:19:24.462 [2024-10-21T10:04:01.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.462 [2024-10-21T10:04:01.057Z] =================================================================================================================== 00:19:24.462 [2024-10-21T10:04:01.057Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 992908 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IhNJs6vtw1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.IhNJs6vtw1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IhNJs6vtw1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhNJs6vtw1 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=993104 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 993104 /var/tmp/bdevperf.sock 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 993104 ']' 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.462 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.462 [2024-10-21 12:04:00.994400] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:24.462 [2024-10-21 12:04:00.994456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993104 ] 00:19:24.724 [2024-10-21 12:04:01.070445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.724 [2024-10-21 12:04:01.098915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.294 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.294 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.294 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhNJs6vtw1 00:19:25.555 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:25.555 [2024-10-21 12:04:02.117129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.555 [2024-10-21 12:04:02.122505] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:25.555 [2024-10-21 12:04:02.122523] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:25.555 [2024-10-21 12:04:02.122542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.555 [2024-10-21 12:04:02.123127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a41c0 (107): Transport endpoint is not connected 00:19:25.555 [2024-10-21 12:04:02.124122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a41c0 (9): Bad file descriptor 00:19:25.555 [2024-10-21 12:04:02.125124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.555 [2024-10-21 12:04:02.125131] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.555 [2024-10-21 12:04:02.125137] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:25.555 [2024-10-21 12:04:02.125144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
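This second negative case fails for a different reason than the first: the key (/tmp/tmp.IhNJs6vtw1) is valid, but the target looks PSKs up by a TLS identity derived from hostnqn and subnqn, and nothing was registered for host2. Schematically, using the exact string tcp.c reports above:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"  # the identity tcp.c says it could not find
echo "$identity"
# nvmf_subsystem_add_host earlier bound key0 to host1 only, so this lookup misses,
# the server aborts the handshake, and the initiator again reports errno 107.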
00:19:25.555 request: 00:19:25.555 { 00:19:25.555 "name": "TLSTEST", 00:19:25.555 "trtype": "tcp", 00:19:25.555 "traddr": "10.0.0.2", 00:19:25.555 "adrfam": "ipv4", 00:19:25.555 "trsvcid": "4420", 00:19:25.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.555 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:25.555 "prchk_reftag": false, 00:19:25.555 "prchk_guard": false, 00:19:25.555 "hdgst": false, 00:19:25.555 "ddgst": false, 00:19:25.555 "psk": "key0", 00:19:25.555 "allow_unrecognized_csi": false, 00:19:25.555 "method": "bdev_nvme_attach_controller", 00:19:25.555 "req_id": 1 00:19:25.555 } 00:19:25.555 Got JSON-RPC error response 00:19:25.555 response: 00:19:25.555 { 00:19:25.555 "code": -5, 00:19:25.555 "message": "Input/output error" 00:19:25.555 } 00:19:25.816 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 993104 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 993104 ']' 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 993104 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993104 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993104' 00:19:25.817 killing process with pid 993104 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 993104 00:19:25.817 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.817 00:19:25.817 Latency(us) 00:19:25.817 [2024-10-21T10:04:02.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.817 [2024-10-21T10:04:02.412Z] =================================================================================================================== 00:19:25.817 [2024-10-21T10:04:02.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 993104 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhNJs6vtw1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.IhNJs6vtw1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhNJs6vtw1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhNJs6vtw1 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=993311 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 993311 /var/tmp/bdevperf.sock 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 993311 ']' 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.817 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.817 [2024-10-21 12:04:02.384063] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:25.817 [2024-10-21 12:04:02.384117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993311 ] 00:19:26.079 [2024-10-21 12:04:02.462109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.079 [2024-10-21 12:04:02.490047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.651 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.651 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.651 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhNJs6vtw1 00:19:26.912 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.173 [2024-10-21 12:04:03.508207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.173 [2024-10-21 12:04:03.515875] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.173 [2024-10-21 12:04:03.515893] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.173 [2024-10-21 12:04:03.515913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.173 [2024-10-21 12:04:03.516324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e51c0 (107): Transport endpoint is not connected 00:19:27.173 [2024-10-21 12:04:03.517317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e51c0 (9): Bad file descriptor 00:19:27.173 [2024-10-21 12:04:03.518319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:27.174 [2024-10-21 12:04:03.518328] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.174 [2024-10-21 12:04:03.518334] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:27.174 [2024-10-21 12:04:03.518342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
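Same negative pattern, mirrored: here the key is valid for cnode1 but the attach targets cnode2, so the lookup for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" fails identically. Only the attach call differs from the sketch above:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
    -q nqn.2016-06.io.spdk:host1 --psk key0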
00:19:27.174 request: 00:19:27.174 { 00:19:27.174 "name": "TLSTEST", 00:19:27.174 "trtype": "tcp", 00:19:27.174 "traddr": "10.0.0.2", 00:19:27.174 "adrfam": "ipv4", 00:19:27.174 "trsvcid": "4420", 00:19:27.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.174 "prchk_reftag": false, 00:19:27.174 "prchk_guard": false, 00:19:27.174 "hdgst": false, 00:19:27.174 "ddgst": false, 00:19:27.174 "psk": "key0", 00:19:27.174 "allow_unrecognized_csi": false, 00:19:27.174 "method": "bdev_nvme_attach_controller", 00:19:27.174 "req_id": 1 00:19:27.174 } 00:19:27.174 Got JSON-RPC error response 00:19:27.174 response: 00:19:27.174 { 00:19:27.174 "code": -5, 00:19:27.174 "message": "Input/output error" 00:19:27.174 } 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 993311 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 993311 ']' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 993311 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993311 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993311' 00:19:27.174 killing process with pid 993311 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 993311 00:19:27.174 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.174 00:19:27.174 Latency(us) 00:19:27.174 [2024-10-21T10:04:03.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.174 [2024-10-21T10:04:03.769Z] =================================================================================================================== 00:19:27.174 [2024-10-21T10:04:03.769Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 993311 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.174 12:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=993645 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 993645 /var/tmp/bdevperf.sock 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 993645 ']' 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.174 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.435 [2024-10-21 12:04:03.768799] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:27.435 [2024-10-21 12:04:03.768870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993645 ] 00:19:27.435 [2024-10-21 12:04:03.846759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.435 [2024-10-21 12:04:03.874659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.008 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.008 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.008 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:28.270 [2024-10-21 12:04:04.756441] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:28.270 [2024-10-21 12:04:04.756465] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:28.270 request: 00:19:28.270 { 00:19:28.270 "name": "key0", 00:19:28.270 "path": "", 00:19:28.270 "method": "keyring_file_add_key", 00:19:28.270 "req_id": 1 00:19:28.270 } 00:19:28.270 Got JSON-RPC error response 00:19:28.270 response: 00:19:28.270 { 00:19:28.270 "code": -1, 00:19:28.270 "message": "Operation not permitted" 00:19:28.270 } 00:19:28.270 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.531 [2024-10-21 12:04:04.940981] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.531 [2024-10-21 12:04:04.941003] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:28.531 request: 00:19:28.531 { 00:19:28.531 "name": "TLSTEST", 00:19:28.531 "trtype": "tcp", 00:19:28.531 "traddr": "10.0.0.2", 00:19:28.531 "adrfam": "ipv4", 00:19:28.531 "trsvcid": "4420", 00:19:28.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.531 "prchk_reftag": false, 00:19:28.531 "prchk_guard": false, 00:19:28.531 "hdgst": false, 00:19:28.531 "ddgst": false, 00:19:28.531 "psk": "key0", 00:19:28.531 "allow_unrecognized_csi": false, 00:19:28.531 "method": "bdev_nvme_attach_controller", 00:19:28.531 "req_id": 1 00:19:28.531 } 00:19:28.531 Got JSON-RPC error response 00:19:28.531 response: 00:19:28.531 { 00:19:28.531 "code": -126, 00:19:28.531 "message": "Required key not available" 00:19:28.531 } 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 993645 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 993645 ']' 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 993645 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.531 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993645 
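The third negative case never reaches the network: run_bdevperf is invoked with an empty PSK path, keyring_file_add_key rejects it up front (keyring.c accepts only absolute paths to key files), and the subsequent attach then fails with -126 because no "key0" was ever registered in that bdevperf process. A sketch of the two-stage failure, with error codes taken from the responses above:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
# -> code -1, "Operation not permitted" (empty/non-absolute path refused)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
# -> code -126, "Required key not available" (Could not load PSK: key0)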
00:19:28.531 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.531 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.531 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993645' 00:19:28.531 killing process with pid 993645 00:19:28.531 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 993645 00:19:28.531 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.531 00:19:28.531 Latency(us) 00:19:28.531 [2024-10-21T10:04:05.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.531 [2024-10-21T10:04:05.126Z] =================================================================================================================== 00:19:28.531 [2024-10-21T10:04:05.126Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.531 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 993645 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 987713 ']' 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 987713' 00:19:28.793 killing process with pid 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 987713 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zqIwOwC1SQ 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zqIwOwC1SQ 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=994002 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 994002 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 994002 ']' 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.793 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.055 [2024-10-21 12:04:05.409392] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:29.055 [2024-10-21 12:04:05.409447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.055 [2024-10-21 12:04:05.469807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.055 [2024-10-21 12:04:05.499045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.055 [2024-10-21 12:04:05.499074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
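The key_long value derived above is the TLS PSK interchange form: the configured secret bytes (here the 48-character ASCII hex string itself) with a 4-byte CRC-32 appended, base64-encoded, and wrapped as NVMeTLSkey-1:<hash>:<b64>:, where digest 2 yields the 02 field. A standalone re-derivation, assuming the same zlib/base64 recipe as the embedded python in nvmf/common.sh's format_key:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - <<EOF
import base64, zlib
key = b"$key"                               # the ASCII string, not decoded hex
crc = zlib.crc32(key).to_bytes(4, "little") # 4-byte little-endian CRC-32 suffix
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected, per the log:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: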
00:19:29.055 [2024-10-21 12:04:05.499079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.055 [2024-10-21 12:04:05.499084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.055 [2024-10-21 12:04:05.499089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.055 [2024-10-21 12:04:05.499551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zqIwOwC1SQ 00:19:29.055 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.316 [2024-10-21 12:04:05.781232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.316 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.576 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.576 [2024-10-21 12:04:06.150163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.576 [2024-10-21 12:04:06.150369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.576 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.837 malloc0 00:19:29.837 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.098 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:30.099 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqIwOwC1SQ 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zqIwOwC1SQ 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=994356 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 994356 /var/tmp/bdevperf.sock 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 994356 ']' 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.359 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.359 [2024-10-21 12:04:06.859030] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
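For the positive test, the target-side setup_nvmf_tgt sequence a few lines above is worth seeing in one place: a TLS-enabled listener is created with -k, a malloc namespace is exposed, and the host is authorized with the PSK via nvmf_subsystem_add_host --psk. Consolidated sketch of the calls from the log (these rpc.py calls go to the nvmf_tgt's default /var/tmp/spdk.sock, inside the cvl_0_0_ns_spdk netns per the log):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ   # key file must be mode 0600
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0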
00:19:30.359 [2024-10-21 12:04:06.859084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994356 ] 00:19:30.359 [2024-10-21 12:04:06.936019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.619 [2024-10-21 12:04:06.965090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.619 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.619 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.619 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:30.880 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.880 [2024-10-21 12:04:07.381795] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.880 TLSTESTn1 00:19:31.141 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.141 Running I/O for 10 seconds... 00:19:33.029 5685.00 IOPS, 22.21 MiB/s [2024-10-21T10:04:11.008Z] 6046.50 IOPS, 23.62 MiB/s [2024-10-21T10:04:11.579Z] 6072.00 IOPS, 23.72 MiB/s [2024-10-21T10:04:12.963Z] 6109.25 IOPS, 23.86 MiB/s [2024-10-21T10:04:13.904Z] 6157.00 IOPS, 24.05 MiB/s [2024-10-21T10:04:14.845Z] 6169.83 IOPS, 24.10 MiB/s [2024-10-21T10:04:15.786Z] 6026.71 IOPS, 23.54 MiB/s [2024-10-21T10:04:16.729Z] 6016.75 IOPS, 23.50 MiB/s [2024-10-21T10:04:17.671Z] 6000.22 IOPS, 23.44 MiB/s [2024-10-21T10:04:17.671Z] 5960.60 IOPS, 23.28 MiB/s 00:19:41.076 Latency(us) 00:19:41.076 [2024-10-21T10:04:17.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.076 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.076 Verification LBA range: start 0x0 length 0x2000 00:19:41.076 TLSTESTn1 : 10.01 5966.25 23.31 0.00 0.00 21424.21 4560.21 30365.01 00:19:41.076 [2024-10-21T10:04:17.671Z] =================================================================================================================== 00:19:41.076 [2024-10-21T10:04:17.671Z] Total : 5966.25 23.31 0.00 0.00 21424.21 4560.21 30365.01 00:19:41.076 { 00:19:41.076 "results": [ 00:19:41.076 { 00:19:41.076 "job": "TLSTESTn1", 00:19:41.076 "core_mask": "0x4", 00:19:41.076 "workload": "verify", 00:19:41.076 "status": "finished", 00:19:41.076 "verify_range": { 00:19:41.076 "start": 0, 00:19:41.076 "length": 8192 00:19:41.076 }, 00:19:41.076 "queue_depth": 128, 00:19:41.076 "io_size": 4096, 00:19:41.076 "runtime": 10.011823, 00:19:41.076 "iops": 5966.2461072274255, 00:19:41.076 "mibps": 23.30564885635713, 00:19:41.076 "io_failed": 0, 00:19:41.076 "io_timeout": 0, 00:19:41.076 "avg_latency_us": 21424.214183338077, 00:19:41.076 "min_latency_us": 4560.213333333333, 00:19:41.076 "max_latency_us": 30365.013333333332 00:19:41.076 } 00:19:41.076 ], 00:19:41.076 
"core_count": 1 00:19:41.076 } 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 994356 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 994356 ']' 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 994356 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.076 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994356 00:19:41.337 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994356' 00:19:41.338 killing process with pid 994356 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 994356 00:19:41.338 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.338 00:19:41.338 Latency(us) 00:19:41.338 [2024-10-21T10:04:17.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.338 [2024-10-21T10:04:17.933Z] =================================================================================================================== 00:19:41.338 [2024-10-21T10:04:17.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 994356 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zqIwOwC1SQ 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqIwOwC1SQ 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqIwOwC1SQ 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqIwOwC1SQ 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.338 
12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zqIwOwC1SQ 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=996375 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 996375 /var/tmp/bdevperf.sock 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 996375 ']' 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.338 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.338 [2024-10-21 12:04:17.853840] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
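With matching key and NQNs the attach above succeeds, TLSTESTn1 appears, and the verify workload is driven over bdevperf's RPC socket; the ~5966 average IOPS in the summary come from that 10-second run. The kick-off is a single script invocation, copied from the log (the -t 20 argument there is bdevperf.py's wait timeout, per its usage in this suite):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests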
00:19:41.338 [2024-10-21 12:04:17.853893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996375 ] 00:19:41.338 [2024-10-21 12:04:17.928823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.600 [2024-10-21 12:04:17.957067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.600 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.600 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.600 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:41.600 [2024-10-21 12:04:18.185294] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zqIwOwC1SQ': 0100666 00:19:41.600 [2024-10-21 12:04:18.185324] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:41.600 request: 00:19:41.600 { 00:19:41.600 "name": "key0", 00:19:41.600 "path": "/tmp/tmp.zqIwOwC1SQ", 00:19:41.600 "method": "keyring_file_add_key", 00:19:41.600 "req_id": 1 00:19:41.600 } 00:19:41.600 Got JSON-RPC error response 00:19:41.600 response: 00:19:41.600 { 00:19:41.600 "code": -1, 00:19:41.600 "message": "Operation not permitted" 00:19:41.600 } 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.861 [2024-10-21 12:04:18.369832] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.861 [2024-10-21 12:04:18.369852] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:41.861 request: 00:19:41.861 { 00:19:41.861 "name": "TLSTEST", 00:19:41.861 "trtype": "tcp", 00:19:41.861 "traddr": "10.0.0.2", 00:19:41.861 "adrfam": "ipv4", 00:19:41.861 "trsvcid": "4420", 00:19:41.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.861 "prchk_reftag": false, 00:19:41.861 "prchk_guard": false, 00:19:41.861 "hdgst": false, 00:19:41.861 "ddgst": false, 00:19:41.861 "psk": "key0", 00:19:41.861 "allow_unrecognized_csi": false, 00:19:41.861 "method": "bdev_nvme_attach_controller", 00:19:41.861 "req_id": 1 00:19:41.861 } 00:19:41.861 Got JSON-RPC error response 00:19:41.861 response: 00:19:41.861 { 00:19:41.861 "code": -126, 00:19:41.861 "message": "Required key not available" 00:19:41.861 } 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 996375 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 996375 ']' 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 996375 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.861 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996375 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996375' 00:19:42.122 killing process with pid 996375 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 996375 00:19:42.122 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.122 00:19:42.122 Latency(us) 00:19:42.122 [2024-10-21T10:04:18.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.122 [2024-10-21T10:04:18.717Z] =================================================================================================================== 00:19:42.122 [2024-10-21T10:04:18.717Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 996375 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 994002 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 994002 ']' 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 994002 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994002 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994002' 00:19:42.122 killing process with pid 994002 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 994002 00:19:42.122 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 994002 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=996717 
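The block above (tls.sh@171-172) checks the keyring's permission gate on the initiator side: after chmod 0666, keyring_file_add_key refuses the file (the 0100666 mode in the error; the keyring expects owner-only 0600), and the attach again dies with -126. Sketch:

chmod 0666 /tmp/tmp.zqIwOwC1SQ
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ
# -> code -1, "Invalid permissions for key file '/tmp/tmp.zqIwOwC1SQ': 0100666"
chmod 0600 /tmp/tmp.zqIwOwC1SQ   # tls.sh@182 restores this before the key is used again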
00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 996717 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 996717 ']' 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.382 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.383 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.383 [2024-10-21 12:04:18.792337] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:42.383 [2024-10-21 12:04:18.792391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.383 [2024-10-21 12:04:18.878379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.383 [2024-10-21 12:04:18.907147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.383 [2024-10-21 12:04:18.907179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.383 [2024-10-21 12:04:18.907184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.383 [2024-10-21 12:04:18.907189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.383 [2024-10-21 12:04:18.907193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.383 [2024-10-21 12:04:18.907680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:19:43.324 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zqIwOwC1SQ 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.325 [2024-10-21 12:04:19.790919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.325 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.586 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.586 [2024-10-21 12:04:20.159850] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.586 [2024-10-21 12:04:20.160064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.846 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.846 malloc0 00:19:43.846 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.107 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:44.367 [2024-10-21 
12:04:20.710839] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zqIwOwC1SQ': 0100666 00:19:44.367 [2024-10-21 12:04:20.710860] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:44.367 request: 00:19:44.367 { 00:19:44.367 "name": "key0", 00:19:44.367 "path": "/tmp/tmp.zqIwOwC1SQ", 00:19:44.367 "method": "keyring_file_add_key", 00:19:44.367 "req_id": 1 00:19:44.367 } 00:19:44.367 Got JSON-RPC error response 00:19:44.367 response: 00:19:44.367 { 00:19:44.367 "code": -1, 00:19:44.367 "message": "Operation not permitted" 00:19:44.367 } 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.368 [2024-10-21 12:04:20.887294] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:44.368 [2024-10-21 12:04:20.887324] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:44.368 request: 00:19:44.368 { 00:19:44.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.368 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.368 "psk": "key0", 00:19:44.368 "method": "nvmf_subsystem_add_host", 00:19:44.368 "req_id": 1 00:19:44.368 } 00:19:44.368 Got JSON-RPC error response 00:19:44.368 response: 00:19:44.368 { 00:19:44.368 "code": -32603, 00:19:44.368 "message": "Internal error" 00:19:44.368 } 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 996717 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 996717 ']' 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 996717 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.368 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996717 00:19:44.628 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.628 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.628 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996717' 00:19:44.628 killing process with pid 996717 00:19:44.628 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 996717 00:19:44.628 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 996717 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zqIwOwC1SQ 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=997092 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 997092 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 997092 ']' 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.628 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.628 [2024-10-21 12:04:21.156013] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:44.628 [2024-10-21 12:04:21.156065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.889 [2024-10-21 12:04:21.242025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.889 [2024-10-21 12:04:21.271296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.889 [2024-10-21 12:04:21.271329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.890 [2024-10-21 12:04:21.271335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.890 [2024-10-21 12:04:21.271339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.890 [2024-10-21 12:04:21.271344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
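[annotation] The keyring failure logged above is the expected negative path (target/tls.sh@178 wraps setup_nvmf_tgt in NOT): SPDK's file-based keyring rejects PSK files that are group- or world-readable, which is why mode 0100666 yields "Operation not permitted" and the follow-up nvmf_subsystem_add_host then fails with "Key 'key0' does not exist". A minimal sketch of the recovery sequence the harness performs next, using the key path and RPC script shown in this run:

  KEY=/tmp/tmp.zqIwOwC1SQ
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  chmod 0600 "$KEY"                        # owner-only access satisfies keyring_file_check_path
  "$RPC" keyring_file_add_key key0 "$KEY"  # now succeeds; the PSK is registered as 'key0'
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0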
00:19:44.890 [2024-10-21 12:04:21.271828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zqIwOwC1SQ 00:19:45.492 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.777 [2024-10-21 12:04:22.151164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.777 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.777 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.043 [2024-10-21 12:04:22.516067] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.043 [2024-10-21 12:04:22.516266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.043 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.303 malloc0 00:19:46.304 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.564 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:46.564 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=997509 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 997509 /var/tmp/bdevperf.sock 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 997509 ']' 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.825 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.825 [2024-10-21 12:04:23.299423] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:46.825 [2024-10-21 12:04:23.299476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997509 ] 00:19:46.825 [2024-10-21 12:04:23.375945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.825 [2024-10-21 12:04:23.411140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.767 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.767 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.767 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:19:47.767 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.029 [2024-10-21 12:04:24.430391] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.029 TLSTESTn1 00:19:48.029 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:48.290 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:48.290 "subsystems": [ 00:19:48.290 { 00:19:48.290 "subsystem": "keyring", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "keyring_file_add_key", 00:19:48.291 "params": { 00:19:48.291 "name": "key0", 00:19:48.291 "path": "/tmp/tmp.zqIwOwC1SQ" 00:19:48.291 } 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "iobuf", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "iobuf_set_options", 00:19:48.291 "params": { 00:19:48.291 "small_pool_count": 8192, 00:19:48.291 "large_pool_count": 1024, 00:19:48.291 "small_bufsize": 8192, 00:19:48.291 "large_bufsize": 135168 00:19:48.291 } 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "sock", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "sock_set_default_impl", 00:19:48.291 "params": { 00:19:48.291 "impl_name": "posix" 00:19:48.291 } 00:19:48.291 }, 
00:19:48.291 { 00:19:48.291 "method": "sock_impl_set_options", 00:19:48.291 "params": { 00:19:48.291 "impl_name": "ssl", 00:19:48.291 "recv_buf_size": 4096, 00:19:48.291 "send_buf_size": 4096, 00:19:48.291 "enable_recv_pipe": true, 00:19:48.291 "enable_quickack": false, 00:19:48.291 "enable_placement_id": 0, 00:19:48.291 "enable_zerocopy_send_server": true, 00:19:48.291 "enable_zerocopy_send_client": false, 00:19:48.291 "zerocopy_threshold": 0, 00:19:48.291 "tls_version": 0, 00:19:48.291 "enable_ktls": false 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "sock_impl_set_options", 00:19:48.291 "params": { 00:19:48.291 "impl_name": "posix", 00:19:48.291 "recv_buf_size": 2097152, 00:19:48.291 "send_buf_size": 2097152, 00:19:48.291 "enable_recv_pipe": true, 00:19:48.291 "enable_quickack": false, 00:19:48.291 "enable_placement_id": 0, 00:19:48.291 "enable_zerocopy_send_server": true, 00:19:48.291 "enable_zerocopy_send_client": false, 00:19:48.291 "zerocopy_threshold": 0, 00:19:48.291 "tls_version": 0, 00:19:48.291 "enable_ktls": false 00:19:48.291 } 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "vmd", 00:19:48.291 "config": [] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "accel", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "accel_set_options", 00:19:48.291 "params": { 00:19:48.291 "small_cache_size": 128, 00:19:48.291 "large_cache_size": 16, 00:19:48.291 "task_count": 2048, 00:19:48.291 "sequence_count": 2048, 00:19:48.291 "buf_count": 2048 00:19:48.291 } 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "bdev", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "bdev_set_options", 00:19:48.291 "params": { 00:19:48.291 "bdev_io_pool_size": 65535, 00:19:48.291 "bdev_io_cache_size": 256, 00:19:48.291 "bdev_auto_examine": true, 00:19:48.291 "iobuf_small_cache_size": 128, 00:19:48.291 "iobuf_large_cache_size": 16 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_raid_set_options", 00:19:48.291 "params": { 00:19:48.291 "process_window_size_kb": 1024, 00:19:48.291 "process_max_bandwidth_mb_sec": 0 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_iscsi_set_options", 00:19:48.291 "params": { 00:19:48.291 "timeout_sec": 30 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_nvme_set_options", 00:19:48.291 "params": { 00:19:48.291 "action_on_timeout": "none", 00:19:48.291 "timeout_us": 0, 00:19:48.291 "timeout_admin_us": 0, 00:19:48.291 "keep_alive_timeout_ms": 10000, 00:19:48.291 "arbitration_burst": 0, 00:19:48.291 "low_priority_weight": 0, 00:19:48.291 "medium_priority_weight": 0, 00:19:48.291 "high_priority_weight": 0, 00:19:48.291 "nvme_adminq_poll_period_us": 10000, 00:19:48.291 "nvme_ioq_poll_period_us": 0, 00:19:48.291 "io_queue_requests": 0, 00:19:48.291 "delay_cmd_submit": true, 00:19:48.291 "transport_retry_count": 4, 00:19:48.291 "bdev_retry_count": 3, 00:19:48.291 "transport_ack_timeout": 0, 00:19:48.291 "ctrlr_loss_timeout_sec": 0, 00:19:48.291 "reconnect_delay_sec": 0, 00:19:48.291 "fast_io_fail_timeout_sec": 0, 00:19:48.291 "disable_auto_failback": false, 00:19:48.291 "generate_uuids": false, 00:19:48.291 "transport_tos": 0, 00:19:48.291 "nvme_error_stat": false, 00:19:48.291 "rdma_srq_size": 0, 00:19:48.291 "io_path_stat": false, 00:19:48.291 "allow_accel_sequence": false, 00:19:48.291 "rdma_max_cq_size": 0, 00:19:48.291 "rdma_cm_event_timeout_ms": 0, 00:19:48.291 
"dhchap_digests": [ 00:19:48.291 "sha256", 00:19:48.291 "sha384", 00:19:48.291 "sha512" 00:19:48.291 ], 00:19:48.291 "dhchap_dhgroups": [ 00:19:48.291 "null", 00:19:48.291 "ffdhe2048", 00:19:48.291 "ffdhe3072", 00:19:48.291 "ffdhe4096", 00:19:48.291 "ffdhe6144", 00:19:48.291 "ffdhe8192" 00:19:48.291 ] 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_nvme_set_hotplug", 00:19:48.291 "params": { 00:19:48.291 "period_us": 100000, 00:19:48.291 "enable": false 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_malloc_create", 00:19:48.291 "params": { 00:19:48.291 "name": "malloc0", 00:19:48.291 "num_blocks": 8192, 00:19:48.291 "block_size": 4096, 00:19:48.291 "physical_block_size": 4096, 00:19:48.291 "uuid": "2f2c5c91-3ad3-4fee-b031-86c71c69baff", 00:19:48.291 "optimal_io_boundary": 0, 00:19:48.291 "md_size": 0, 00:19:48.291 "dif_type": 0, 00:19:48.291 "dif_is_head_of_md": false, 00:19:48.291 "dif_pi_format": 0 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "bdev_wait_for_examine" 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "nbd", 00:19:48.291 "config": [] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "scheduler", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "framework_set_scheduler", 00:19:48.291 "params": { 00:19:48.291 "name": "static" 00:19:48.291 } 00:19:48.291 } 00:19:48.291 ] 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "subsystem": "nvmf", 00:19:48.291 "config": [ 00:19:48.291 { 00:19:48.291 "method": "nvmf_set_config", 00:19:48.291 "params": { 00:19:48.291 "discovery_filter": "match_any", 00:19:48.291 "admin_cmd_passthru": { 00:19:48.291 "identify_ctrlr": false 00:19:48.291 }, 00:19:48.291 "dhchap_digests": [ 00:19:48.291 "sha256", 00:19:48.291 "sha384", 00:19:48.291 "sha512" 00:19:48.291 ], 00:19:48.291 "dhchap_dhgroups": [ 00:19:48.291 "null", 00:19:48.291 "ffdhe2048", 00:19:48.291 "ffdhe3072", 00:19:48.291 "ffdhe4096", 00:19:48.291 "ffdhe6144", 00:19:48.291 "ffdhe8192" 00:19:48.291 ] 00:19:48.291 } 00:19:48.291 }, 00:19:48.291 { 00:19:48.291 "method": "nvmf_set_max_subsystems", 00:19:48.291 "params": { 00:19:48.291 "max_subsystems": 1024 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_set_crdt", 00:19:48.292 "params": { 00:19:48.292 "crdt1": 0, 00:19:48.292 "crdt2": 0, 00:19:48.292 "crdt3": 0 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_create_transport", 00:19:48.292 "params": { 00:19:48.292 "trtype": "TCP", 00:19:48.292 "max_queue_depth": 128, 00:19:48.292 "max_io_qpairs_per_ctrlr": 127, 00:19:48.292 "in_capsule_data_size": 4096, 00:19:48.292 "max_io_size": 131072, 00:19:48.292 "io_unit_size": 131072, 00:19:48.292 "max_aq_depth": 128, 00:19:48.292 "num_shared_buffers": 511, 00:19:48.292 "buf_cache_size": 4294967295, 00:19:48.292 "dif_insert_or_strip": false, 00:19:48.292 "zcopy": false, 00:19:48.292 "c2h_success": false, 00:19:48.292 "sock_priority": 0, 00:19:48.292 "abort_timeout_sec": 1, 00:19:48.292 "ack_timeout": 0, 00:19:48.292 "data_wr_pool_size": 0 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_create_subsystem", 00:19:48.292 "params": { 00:19:48.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.292 "allow_any_host": false, 00:19:48.292 "serial_number": "SPDK00000000000001", 00:19:48.292 "model_number": "SPDK bdev Controller", 00:19:48.292 "max_namespaces": 10, 00:19:48.292 "min_cntlid": 1, 00:19:48.292 "max_cntlid": 65519, 00:19:48.292 
"ana_reporting": false 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_subsystem_add_host", 00:19:48.292 "params": { 00:19:48.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.292 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.292 "psk": "key0" 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_subsystem_add_ns", 00:19:48.292 "params": { 00:19:48.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.292 "namespace": { 00:19:48.292 "nsid": 1, 00:19:48.292 "bdev_name": "malloc0", 00:19:48.292 "nguid": "2F2C5C913AD34FEEB03186C71C69BAFF", 00:19:48.292 "uuid": "2f2c5c91-3ad3-4fee-b031-86c71c69baff", 00:19:48.292 "no_auto_visible": false 00:19:48.292 } 00:19:48.292 } 00:19:48.292 }, 00:19:48.292 { 00:19:48.292 "method": "nvmf_subsystem_add_listener", 00:19:48.292 "params": { 00:19:48.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.292 "listen_address": { 00:19:48.292 "trtype": "TCP", 00:19:48.292 "adrfam": "IPv4", 00:19:48.292 "traddr": "10.0.0.2", 00:19:48.292 "trsvcid": "4420" 00:19:48.292 }, 00:19:48.292 "secure_channel": true 00:19:48.292 } 00:19:48.292 } 00:19:48.292 ] 00:19:48.292 } 00:19:48.292 ] 00:19:48.292 }' 00:19:48.292 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:48.553 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:48.553 "subsystems": [ 00:19:48.553 { 00:19:48.553 "subsystem": "keyring", 00:19:48.553 "config": [ 00:19:48.553 { 00:19:48.553 "method": "keyring_file_add_key", 00:19:48.553 "params": { 00:19:48.553 "name": "key0", 00:19:48.553 "path": "/tmp/tmp.zqIwOwC1SQ" 00:19:48.553 } 00:19:48.553 } 00:19:48.553 ] 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "subsystem": "iobuf", 00:19:48.553 "config": [ 00:19:48.553 { 00:19:48.553 "method": "iobuf_set_options", 00:19:48.553 "params": { 00:19:48.553 "small_pool_count": 8192, 00:19:48.553 "large_pool_count": 1024, 00:19:48.553 "small_bufsize": 8192, 00:19:48.553 "large_bufsize": 135168 00:19:48.553 } 00:19:48.553 } 00:19:48.553 ] 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "subsystem": "sock", 00:19:48.553 "config": [ 00:19:48.553 { 00:19:48.553 "method": "sock_set_default_impl", 00:19:48.553 "params": { 00:19:48.553 "impl_name": "posix" 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "sock_impl_set_options", 00:19:48.553 "params": { 00:19:48.553 "impl_name": "ssl", 00:19:48.553 "recv_buf_size": 4096, 00:19:48.553 "send_buf_size": 4096, 00:19:48.553 "enable_recv_pipe": true, 00:19:48.553 "enable_quickack": false, 00:19:48.553 "enable_placement_id": 0, 00:19:48.553 "enable_zerocopy_send_server": true, 00:19:48.553 "enable_zerocopy_send_client": false, 00:19:48.553 "zerocopy_threshold": 0, 00:19:48.553 "tls_version": 0, 00:19:48.553 "enable_ktls": false 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "sock_impl_set_options", 00:19:48.553 "params": { 00:19:48.553 "impl_name": "posix", 00:19:48.553 "recv_buf_size": 2097152, 00:19:48.553 "send_buf_size": 2097152, 00:19:48.553 "enable_recv_pipe": true, 00:19:48.553 "enable_quickack": false, 00:19:48.553 "enable_placement_id": 0, 00:19:48.553 "enable_zerocopy_send_server": true, 00:19:48.553 "enable_zerocopy_send_client": false, 00:19:48.553 "zerocopy_threshold": 0, 00:19:48.553 "tls_version": 0, 00:19:48.553 "enable_ktls": false 00:19:48.553 } 00:19:48.553 } 00:19:48.553 ] 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 
"subsystem": "vmd", 00:19:48.553 "config": [] 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "subsystem": "accel", 00:19:48.553 "config": [ 00:19:48.553 { 00:19:48.553 "method": "accel_set_options", 00:19:48.553 "params": { 00:19:48.553 "small_cache_size": 128, 00:19:48.553 "large_cache_size": 16, 00:19:48.553 "task_count": 2048, 00:19:48.553 "sequence_count": 2048, 00:19:48.553 "buf_count": 2048 00:19:48.553 } 00:19:48.553 } 00:19:48.553 ] 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "subsystem": "bdev", 00:19:48.553 "config": [ 00:19:48.553 { 00:19:48.553 "method": "bdev_set_options", 00:19:48.553 "params": { 00:19:48.553 "bdev_io_pool_size": 65535, 00:19:48.553 "bdev_io_cache_size": 256, 00:19:48.553 "bdev_auto_examine": true, 00:19:48.553 "iobuf_small_cache_size": 128, 00:19:48.553 "iobuf_large_cache_size": 16 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "bdev_raid_set_options", 00:19:48.553 "params": { 00:19:48.553 "process_window_size_kb": 1024, 00:19:48.553 "process_max_bandwidth_mb_sec": 0 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "bdev_iscsi_set_options", 00:19:48.553 "params": { 00:19:48.553 "timeout_sec": 30 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "bdev_nvme_set_options", 00:19:48.553 "params": { 00:19:48.553 "action_on_timeout": "none", 00:19:48.553 "timeout_us": 0, 00:19:48.553 "timeout_admin_us": 0, 00:19:48.553 "keep_alive_timeout_ms": 10000, 00:19:48.553 "arbitration_burst": 0, 00:19:48.553 "low_priority_weight": 0, 00:19:48.553 "medium_priority_weight": 0, 00:19:48.553 "high_priority_weight": 0, 00:19:48.553 "nvme_adminq_poll_period_us": 10000, 00:19:48.553 "nvme_ioq_poll_period_us": 0, 00:19:48.553 "io_queue_requests": 512, 00:19:48.553 "delay_cmd_submit": true, 00:19:48.553 "transport_retry_count": 4, 00:19:48.553 "bdev_retry_count": 3, 00:19:48.553 "transport_ack_timeout": 0, 00:19:48.553 "ctrlr_loss_timeout_sec": 0, 00:19:48.553 "reconnect_delay_sec": 0, 00:19:48.553 "fast_io_fail_timeout_sec": 0, 00:19:48.553 "disable_auto_failback": false, 00:19:48.553 "generate_uuids": false, 00:19:48.553 "transport_tos": 0, 00:19:48.553 "nvme_error_stat": false, 00:19:48.553 "rdma_srq_size": 0, 00:19:48.553 "io_path_stat": false, 00:19:48.553 "allow_accel_sequence": false, 00:19:48.553 "rdma_max_cq_size": 0, 00:19:48.553 "rdma_cm_event_timeout_ms": 0, 00:19:48.553 "dhchap_digests": [ 00:19:48.553 "sha256", 00:19:48.553 "sha384", 00:19:48.553 "sha512" 00:19:48.553 ], 00:19:48.553 "dhchap_dhgroups": [ 00:19:48.553 "null", 00:19:48.553 "ffdhe2048", 00:19:48.553 "ffdhe3072", 00:19:48.553 "ffdhe4096", 00:19:48.553 "ffdhe6144", 00:19:48.553 "ffdhe8192" 00:19:48.553 ] 00:19:48.553 } 00:19:48.553 }, 00:19:48.553 { 00:19:48.553 "method": "bdev_nvme_attach_controller", 00:19:48.553 "params": { 00:19:48.553 "name": "TLSTEST", 00:19:48.553 "trtype": "TCP", 00:19:48.553 "adrfam": "IPv4", 00:19:48.553 "traddr": "10.0.0.2", 00:19:48.553 "trsvcid": "4420", 00:19:48.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.553 "prchk_reftag": false, 00:19:48.553 "prchk_guard": false, 00:19:48.553 "ctrlr_loss_timeout_sec": 0, 00:19:48.553 "reconnect_delay_sec": 0, 00:19:48.553 "fast_io_fail_timeout_sec": 0, 00:19:48.553 "psk": "key0", 00:19:48.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.553 "hdgst": false, 00:19:48.554 "ddgst": false, 00:19:48.554 "multipath": "multipath" 00:19:48.554 } 00:19:48.554 }, 00:19:48.554 { 00:19:48.554 "method": "bdev_nvme_set_hotplug", 00:19:48.554 "params": { 00:19:48.554 "period_us": 
100000, 00:19:48.554 "enable": false 00:19:48.554 } 00:19:48.554 }, 00:19:48.554 { 00:19:48.554 "method": "bdev_wait_for_examine" 00:19:48.554 } 00:19:48.554 ] 00:19:48.554 }, 00:19:48.554 { 00:19:48.554 "subsystem": "nbd", 00:19:48.554 "config": [] 00:19:48.554 } 00:19:48.554 ] 00:19:48.554 }' 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 997509 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 997509 ']' 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 997509 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997509 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997509' 00:19:48.554 killing process with pid 997509 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 997509 00:19:48.554 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.554 00:19:48.554 Latency(us) 00:19:48.554 [2024-10-21T10:04:25.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.554 [2024-10-21T10:04:25.149Z] =================================================================================================================== 00:19:48.554 [2024-10-21T10:04:25.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 997509 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 997092 ']' 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997092' 00:19:48.815 killing process with pid 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 997092 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:48.815 12:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.815 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:48.815 "subsystems": [ 00:19:48.815 { 00:19:48.815 "subsystem": "keyring", 00:19:48.815 "config": [ 00:19:48.815 { 00:19:48.815 "method": "keyring_file_add_key", 00:19:48.815 "params": { 00:19:48.815 "name": "key0", 00:19:48.815 "path": "/tmp/tmp.zqIwOwC1SQ" 00:19:48.815 } 00:19:48.815 } 00:19:48.815 ] 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "subsystem": "iobuf", 00:19:48.815 "config": [ 00:19:48.815 { 00:19:48.815 "method": "iobuf_set_options", 00:19:48.815 "params": { 00:19:48.815 "small_pool_count": 8192, 00:19:48.815 "large_pool_count": 1024, 00:19:48.815 "small_bufsize": 8192, 00:19:48.815 "large_bufsize": 135168 00:19:48.815 } 00:19:48.815 } 00:19:48.815 ] 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "subsystem": "sock", 00:19:48.815 "config": [ 00:19:48.815 { 00:19:48.815 "method": "sock_set_default_impl", 00:19:48.815 "params": { 00:19:48.815 "impl_name": "posix" 00:19:48.815 } 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "method": "sock_impl_set_options", 00:19:48.815 "params": { 00:19:48.815 "impl_name": "ssl", 00:19:48.815 "recv_buf_size": 4096, 00:19:48.815 "send_buf_size": 4096, 00:19:48.815 "enable_recv_pipe": true, 00:19:48.815 "enable_quickack": false, 00:19:48.815 "enable_placement_id": 0, 00:19:48.815 "enable_zerocopy_send_server": true, 00:19:48.815 "enable_zerocopy_send_client": false, 00:19:48.815 "zerocopy_threshold": 0, 00:19:48.815 "tls_version": 0, 00:19:48.815 "enable_ktls": false 00:19:48.815 } 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "method": "sock_impl_set_options", 00:19:48.815 "params": { 00:19:48.815 "impl_name": "posix", 00:19:48.815 "recv_buf_size": 2097152, 00:19:48.815 "send_buf_size": 2097152, 00:19:48.815 "enable_recv_pipe": true, 00:19:48.815 "enable_quickack": false, 00:19:48.815 "enable_placement_id": 0, 00:19:48.815 "enable_zerocopy_send_server": true, 00:19:48.815 "enable_zerocopy_send_client": false, 00:19:48.815 "zerocopy_threshold": 0, 00:19:48.815 "tls_version": 0, 00:19:48.815 "enable_ktls": false 00:19:48.815 } 00:19:48.815 } 00:19:48.815 ] 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "subsystem": "vmd", 00:19:48.815 "config": [] 00:19:48.815 }, 00:19:48.815 { 00:19:48.815 "subsystem": "accel", 00:19:48.815 "config": [ 00:19:48.815 { 00:19:48.815 "method": "accel_set_options", 00:19:48.815 "params": { 00:19:48.815 "small_cache_size": 128, 00:19:48.815 "large_cache_size": 16, 00:19:48.815 "task_count": 2048, 00:19:48.815 "sequence_count": 2048, 00:19:48.815 "buf_count": 2048 00:19:48.815 } 00:19:48.815 } 00:19:48.815 ] 00:19:48.815 }, 00:19:48.816 { 00:19:48.816 "subsystem": "bdev", 00:19:48.816 "config": [ 00:19:48.816 { 00:19:48.816 "method": "bdev_set_options", 00:19:48.816 "params": { 00:19:48.816 "bdev_io_pool_size": 65535, 00:19:48.816 "bdev_io_cache_size": 256, 00:19:48.816 "bdev_auto_examine": true, 00:19:48.816 "iobuf_small_cache_size": 128, 00:19:48.816 "iobuf_large_cache_size": 16 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "bdev_raid_set_options", 00:19:48.816 "params": { 00:19:48.816 "process_window_size_kb": 1024, 00:19:48.816 "process_max_bandwidth_mb_sec": 0 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 
00:19:48.816 "method": "bdev_iscsi_set_options", 00:19:48.816 "params": { 00:19:48.816 "timeout_sec": 30 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "bdev_nvme_set_options", 00:19:48.816 "params": { 00:19:48.816 "action_on_timeout": "none", 00:19:48.816 "timeout_us": 0, 00:19:48.816 "timeout_admin_us": 0, 00:19:48.816 "keep_alive_timeout_ms": 10000, 00:19:48.816 "arbitration_burst": 0, 00:19:48.816 "low_priority_weight": 0, 00:19:48.816 "medium_priority_weight": 0, 00:19:48.816 "high_priority_weight": 0, 00:19:48.816 "nvme_adminq_poll_period_us": 10000, 00:19:48.816 "nvme_ioq_poll_period_us": 0, 00:19:48.816 "io_queue_requests": 0, 00:19:48.816 "delay_cmd_submit": true, 00:19:48.816 "transport_retry_count": 4, 00:19:48.816 "bdev_retry_count": 3, 00:19:48.816 "transport_ack_timeout": 0, 00:19:48.816 "ctrlr_loss_timeout_sec": 0, 00:19:48.816 "reconnect_delay_sec": 0, 00:19:48.816 "fast_io_fail_timeout_sec": 0, 00:19:48.816 "disable_auto_failback": false, 00:19:48.816 "generate_uuids": false, 00:19:48.816 "transport_tos": 0, 00:19:48.816 "nvme_error_stat": false, 00:19:48.816 "rdma_srq_size": 0, 00:19:48.816 "io_path_stat": false, 00:19:48.816 "allow_accel_sequence": false, 00:19:48.816 "rdma_max_cq_size": 0, 00:19:48.816 "rdma_cm_event_timeout_ms": 0, 00:19:48.816 "dhchap_digests": [ 00:19:48.816 "sha256", 00:19:48.816 "sha384", 00:19:48.816 "sha512" 00:19:48.816 ], 00:19:48.816 "dhchap_dhgroups": [ 00:19:48.816 "null", 00:19:48.816 "ffdhe2048", 00:19:48.816 "ffdhe3072", 00:19:48.816 "ffdhe4096", 00:19:48.816 "ffdhe6144", 00:19:48.816 "ffdhe8192" 00:19:48.816 ] 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "bdev_nvme_set_hotplug", 00:19:48.816 "params": { 00:19:48.816 "period_us": 100000, 00:19:48.816 "enable": false 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "bdev_malloc_create", 00:19:48.816 "params": { 00:19:48.816 "name": "malloc0", 00:19:48.816 "num_blocks": 8192, 00:19:48.816 "block_size": 4096, 00:19:48.816 "physical_block_size": 4096, 00:19:48.816 "uuid": "2f2c5c91-3ad3-4fee-b031-86c71c69baff", 00:19:48.816 "optimal_io_boundary": 0, 00:19:48.816 "md_size": 0, 00:19:48.816 "dif_type": 0, 00:19:48.816 "dif_is_head_of_md": false, 00:19:48.816 "dif_pi_format": 0 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "bdev_wait_for_examine" 00:19:48.816 } 00:19:48.816 ] 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "subsystem": "nbd", 00:19:48.816 "config": [] 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "subsystem": "scheduler", 00:19:48.816 "config": [ 00:19:48.816 { 00:19:48.816 "method": "framework_set_scheduler", 00:19:48.816 "params": { 00:19:48.816 "name": "static" 00:19:48.816 } 00:19:48.816 } 00:19:48.816 ] 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "subsystem": "nvmf", 00:19:48.816 "config": [ 00:19:48.816 { 00:19:48.816 "method": "nvmf_set_config", 00:19:48.816 "params": { 00:19:48.816 "discovery_filter": "match_any", 00:19:48.816 "admin_cmd_passthru": { 00:19:48.816 "identify_ctrlr": false 00:19:48.816 }, 00:19:48.816 "dhchap_digests": [ 00:19:48.816 "sha256", 00:19:48.816 "sha384", 00:19:48.816 "sha512" 00:19:48.816 ], 00:19:48.816 "dhchap_dhgroups": [ 00:19:48.816 "null", 00:19:48.816 "ffdhe2048", 00:19:48.816 "ffdhe3072", 00:19:48.816 "ffdhe4096", 00:19:48.816 "ffdhe6144", 00:19:48.816 "ffdhe8192" 00:19:48.816 ] 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_set_max_subsystems", 00:19:48.816 "params": { 00:19:48.816 "max_subsystems": 1024 00:19:48.816 } 
00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_set_crdt", 00:19:48.816 "params": { 00:19:48.816 "crdt1": 0, 00:19:48.816 "crdt2": 0, 00:19:48.816 "crdt3": 0 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_create_transport", 00:19:48.816 "params": { 00:19:48.816 "trtype": "TCP", 00:19:48.816 "max_queue_depth": 128, 00:19:48.816 "max_io_qpairs_per_ctrlr": 127, 00:19:48.816 "in_capsule_data_size": 4096, 00:19:48.816 "max_io_size": 131072, 00:19:48.816 "io_unit_size": 131072, 00:19:48.816 "max_aq_depth": 128, 00:19:48.816 "num_shared_buffers": 511, 00:19:48.816 "buf_cache_size": 4294967295, 00:19:48.816 "dif_insert_or_strip": false, 00:19:48.816 "zcopy": false, 00:19:48.816 "c2h_success": false, 00:19:48.816 "sock_priority": 0, 00:19:48.816 "abort_timeout_sec": 1, 00:19:48.816 "ack_timeout": 0, 00:19:48.816 "data_wr_pool_size": 0 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_create_subsystem", 00:19:48.816 "params": { 00:19:48.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.816 "allow_any_host": false, 00:19:48.816 "serial_number": "SPDK00000000000001", 00:19:48.816 "model_number": "SPDK bdev Controller", 00:19:48.816 "max_namespaces": 10, 00:19:48.816 "min_cntlid": 1, 00:19:48.816 "max_cntlid": 65519, 00:19:48.816 "ana_reporting": false 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_subsystem_add_host", 00:19:48.816 "params": { 00:19:48.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.816 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.816 "psk": "key0" 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_subsystem_add_ns", 00:19:48.816 "params": { 00:19:48.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.816 "namespace": { 00:19:48.816 "nsid": 1, 00:19:48.816 "bdev_name": "malloc0", 00:19:48.816 "nguid": "2F2C5C913AD34FEEB03186C71C69BAFF", 00:19:48.816 "uuid": "2f2c5c91-3ad3-4fee-b031-86c71c69baff", 00:19:48.816 "no_auto_visible": false 00:19:48.816 } 00:19:48.816 } 00:19:48.816 }, 00:19:48.816 { 00:19:48.816 "method": "nvmf_subsystem_add_listener", 00:19:48.816 "params": { 00:19:48.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.816 "listen_address": { 00:19:48.816 "trtype": "TCP", 00:19:48.816 "adrfam": "IPv4", 00:19:48.816 "traddr": "10.0.0.2", 00:19:48.816 "trsvcid": "4420" 00:19:48.816 }, 00:19:48.816 "secure_channel": true 00:19:48.816 } 00:19:48.816 } 00:19:48.816 ] 00:19:48.816 } 00:19:48.816 ] 00:19:48.816 }' 00:19:48.816 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=998063 00:19:48.816 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 998063 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 998063 ']' 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.817 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.078 [2024-10-21 12:04:25.454679] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:49.078 [2024-10-21 12:04:25.454738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.078 [2024-10-21 12:04:25.540007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.078 [2024-10-21 12:04:25.570048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.078 [2024-10-21 12:04:25.570076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.078 [2024-10-21 12:04:25.570082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.078 [2024-10-21 12:04:25.570087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.078 [2024-10-21 12:04:25.570091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.078 [2024-10-21 12:04:25.570582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.339 [2024-10-21 12:04:25.762642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.339 [2024-10-21 12:04:25.794665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.339 [2024-10-21 12:04:25.794882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=998165 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 998165 /var/tmp/bdevperf.sock 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 998165 ']' 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:49.913 "subsystems": [ 00:19:49.913 { 00:19:49.913 "subsystem": "keyring", 00:19:49.913 "config": [ 00:19:49.913 { 00:19:49.913 "method": "keyring_file_add_key", 00:19:49.913 "params": { 00:19:49.913 "name": "key0", 00:19:49.913 "path": "/tmp/tmp.zqIwOwC1SQ" 00:19:49.913 } 00:19:49.913 } 00:19:49.913 ] 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "subsystem": "iobuf", 00:19:49.913 "config": [ 00:19:49.913 { 00:19:49.913 "method": "iobuf_set_options", 00:19:49.913 "params": { 00:19:49.913 "small_pool_count": 8192, 00:19:49.913 "large_pool_count": 1024, 00:19:49.913 "small_bufsize": 8192, 00:19:49.913 "large_bufsize": 135168 00:19:49.913 } 00:19:49.913 } 00:19:49.913 ] 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "subsystem": "sock", 00:19:49.913 "config": [ 00:19:49.913 { 00:19:49.913 "method": "sock_set_default_impl", 00:19:49.913 "params": { 00:19:49.913 "impl_name": "posix" 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "sock_impl_set_options", 00:19:49.913 "params": { 00:19:49.913 "impl_name": "ssl", 00:19:49.913 "recv_buf_size": 4096, 00:19:49.913 "send_buf_size": 4096, 00:19:49.913 "enable_recv_pipe": true, 00:19:49.913 "enable_quickack": false, 00:19:49.913 "enable_placement_id": 0, 00:19:49.913 "enable_zerocopy_send_server": true, 00:19:49.913 "enable_zerocopy_send_client": false, 00:19:49.913 "zerocopy_threshold": 0, 00:19:49.913 "tls_version": 0, 00:19:49.913 "enable_ktls": false 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "sock_impl_set_options", 00:19:49.913 "params": { 00:19:49.913 "impl_name": "posix", 00:19:49.913 "recv_buf_size": 2097152, 00:19:49.913 "send_buf_size": 2097152, 00:19:49.913 "enable_recv_pipe": true, 00:19:49.913 "enable_quickack": false, 00:19:49.913 "enable_placement_id": 0, 00:19:49.913 "enable_zerocopy_send_server": true, 00:19:49.913 "enable_zerocopy_send_client": false, 00:19:49.913 "zerocopy_threshold": 0, 00:19:49.913 "tls_version": 0, 00:19:49.913 "enable_ktls": false 00:19:49.913 } 00:19:49.913 } 00:19:49.913 ] 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "subsystem": "vmd", 00:19:49.913 "config": [] 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "subsystem": "accel", 00:19:49.913 "config": [ 00:19:49.913 { 00:19:49.913 "method": "accel_set_options", 00:19:49.913 "params": { 00:19:49.913 "small_cache_size": 128, 00:19:49.913 "large_cache_size": 16, 00:19:49.913 "task_count": 2048, 00:19:49.913 "sequence_count": 2048, 00:19:49.913 "buf_count": 2048 00:19:49.913 } 00:19:49.913 } 00:19:49.913 ] 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "subsystem": "bdev", 00:19:49.913 "config": [ 00:19:49.913 { 00:19:49.913 "method": "bdev_set_options", 00:19:49.913 "params": { 00:19:49.913 "bdev_io_pool_size": 65535, 00:19:49.913 "bdev_io_cache_size": 256, 00:19:49.913 "bdev_auto_examine": true, 00:19:49.913 "iobuf_small_cache_size": 128, 00:19:49.913 "iobuf_large_cache_size": 16 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "bdev_raid_set_options", 00:19:49.913 
"params": { 00:19:49.913 "process_window_size_kb": 1024, 00:19:49.913 "process_max_bandwidth_mb_sec": 0 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "bdev_iscsi_set_options", 00:19:49.913 "params": { 00:19:49.913 "timeout_sec": 30 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "bdev_nvme_set_options", 00:19:49.913 "params": { 00:19:49.913 "action_on_timeout": "none", 00:19:49.913 "timeout_us": 0, 00:19:49.913 "timeout_admin_us": 0, 00:19:49.913 "keep_alive_timeout_ms": 10000, 00:19:49.913 "arbitration_burst": 0, 00:19:49.913 "low_priority_weight": 0, 00:19:49.913 "medium_priority_weight": 0, 00:19:49.913 "high_priority_weight": 0, 00:19:49.913 "nvme_adminq_poll_period_us": 10000, 00:19:49.913 "nvme_ioq_poll_period_us": 0, 00:19:49.913 "io_queue_requests": 512, 00:19:49.913 "delay_cmd_submit": true, 00:19:49.913 "transport_retry_count": 4, 00:19:49.913 "bdev_retry_count": 3, 00:19:49.913 "transport_ack_timeout": 0, 00:19:49.913 "ctrlr_loss_timeout_sec": 0, 00:19:49.913 "reconnect_delay_sec": 0, 00:19:49.913 "fast_io_fail_timeout_sec": 0, 00:19:49.913 "disable_auto_failback": false, 00:19:49.913 "generate_uuids": false, 00:19:49.913 "transport_tos": 0, 00:19:49.913 "nvme_error_stat": false, 00:19:49.913 "rdma_srq_size": 0, 00:19:49.913 "io_path_stat": false, 00:19:49.913 "allow_accel_sequence": false, 00:19:49.913 "rdma_max_cq_size": 0, 00:19:49.913 "rdma_cm_event_timeout_ms": 0, 00:19:49.913 "dhchap_digests": [ 00:19:49.913 "sha256", 00:19:49.913 "sha384", 00:19:49.913 "sha512" 00:19:49.913 ], 00:19:49.913 "dhchap_dhgroups": [ 00:19:49.913 "null", 00:19:49.913 "ffdhe2048", 00:19:49.913 "ffdhe3072", 00:19:49.913 "ffdhe4096", 00:19:49.913 "ffdhe6144", 00:19:49.913 "ffdhe8192" 00:19:49.913 ] 00:19:49.913 } 00:19:49.913 }, 00:19:49.913 { 00:19:49.913 "method": "bdev_nvme_attach_controller", 00:19:49.913 "params": { 00:19:49.913 "name": "TLSTEST", 00:19:49.913 "trtype": "TCP", 00:19:49.913 "adrfam": "IPv4", 00:19:49.913 "traddr": "10.0.0.2", 00:19:49.913 "trsvcid": "4420", 00:19:49.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.913 "prchk_reftag": false, 00:19:49.913 "prchk_guard": false, 00:19:49.913 "ctrlr_loss_timeout_sec": 0, 00:19:49.913 "reconnect_delay_sec": 0, 00:19:49.913 "fast_io_fail_timeout_sec": 0, 00:19:49.913 "psk": "key0", 00:19:49.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.913 "hdgst": false, 00:19:49.914 "ddgst": false, 00:19:49.914 "multipath": "multipath" 00:19:49.914 } 00:19:49.914 }, 00:19:49.914 { 00:19:49.914 "method": "bdev_nvme_set_hotplug", 00:19:49.914 "params": { 00:19:49.914 "period_us": 100000, 00:19:49.914 "enable": false 00:19:49.914 } 00:19:49.914 }, 00:19:49.914 { 00:19:49.914 "method": "bdev_wait_for_examine" 00:19:49.914 } 00:19:49.914 ] 00:19:49.914 }, 00:19:49.914 { 00:19:49.914 "subsystem": "nbd", 00:19:49.914 "config": [] 00:19:49.914 } 00:19:49.914 ] 00:19:49.914 }' 00:19:49.914 [2024-10-21 12:04:26.328408] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:49.914 [2024-10-21 12:04:26.328460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998165 ] 00:19:49.914 [2024-10-21 12:04:26.404757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.914 [2024-10-21 12:04:26.439876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.175 [2024-10-21 12:04:26.578783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.747 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.747 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.747 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:50.747 Running I/O for 10 seconds... 00:19:52.632 6091.00 IOPS, 23.79 MiB/s [2024-10-21T10:04:30.613Z] 6261.50 IOPS, 24.46 MiB/s [2024-10-21T10:04:31.553Z] 6352.67 IOPS, 24.82 MiB/s [2024-10-21T10:04:32.493Z] 6398.00 IOPS, 24.99 MiB/s [2024-10-21T10:04:33.434Z] 6423.80 IOPS, 25.09 MiB/s [2024-10-21T10:04:34.375Z] 6420.17 IOPS, 25.08 MiB/s [2024-10-21T10:04:35.315Z] 6429.14 IOPS, 25.11 MiB/s [2024-10-21T10:04:36.257Z] 6443.62 IOPS, 25.17 MiB/s [2024-10-21T10:04:37.639Z] 6463.11 IOPS, 25.25 MiB/s [2024-10-21T10:04:37.639Z] 6459.40 IOPS, 25.23 MiB/s 00:20:01.044 Latency(us) 00:20:01.044 [2024-10-21T10:04:37.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.044 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.044 Verification LBA range: start 0x0 length 0x2000 00:20:01.044 TLSTESTn1 : 10.02 6460.89 25.24 0.00 0.00 19776.66 6116.69 22500.69 00:20:01.044 [2024-10-21T10:04:37.639Z] =================================================================================================================== 00:20:01.044 [2024-10-21T10:04:37.639Z] Total : 6460.89 25.24 0.00 0.00 19776.66 6116.69 22500.69 00:20:01.044 { 00:20:01.044 "results": [ 00:20:01.044 { 00:20:01.044 "job": "TLSTESTn1", 00:20:01.044 "core_mask": "0x4", 00:20:01.044 "workload": "verify", 00:20:01.044 "status": "finished", 00:20:01.044 "verify_range": { 00:20:01.044 "start": 0, 00:20:01.044 "length": 8192 00:20:01.044 }, 00:20:01.044 "queue_depth": 128, 00:20:01.044 "io_size": 4096, 00:20:01.044 "runtime": 10.017201, 00:20:01.044 "iops": 6460.886628909612, 00:20:01.044 "mibps": 25.237838394178173, 00:20:01.044 "io_failed": 0, 00:20:01.044 "io_timeout": 0, 00:20:01.044 "avg_latency_us": 19776.660304903173, 00:20:01.044 "min_latency_us": 6116.693333333334, 00:20:01.044 "max_latency_us": 22500.693333333333 00:20:01.044 } 00:20:01.044 ], 00:20:01.044 "core_count": 1 00:20:01.044 } 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 998165 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 998165 ']' 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 998165 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 998165 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 998165' 00:20:01.044 killing process with pid 998165 00:20:01.044 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 998165 00:20:01.045 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.045 00:20:01.045 Latency(us) 00:20:01.045 [2024-10-21T10:04:37.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.045 [2024-10-21T10:04:37.640Z] =================================================================================================================== 00:20:01.045 [2024-10-21T10:04:37.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 998165 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 998063 ']' 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 998063' 00:20:01.045 killing process with pid 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 998063 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1000457 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1000457 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.045 12:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1000457 ']' 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.045 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.306 [2024-10-21 12:04:37.676195] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:01.306 [2024-10-21 12:04:37.676256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.306 [2024-10-21 12:04:37.759874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.306 [2024-10-21 12:04:37.803855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.306 [2024-10-21 12:04:37.803903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.306 [2024-10-21 12:04:37.803912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.306 [2024-10-21 12:04:37.803919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.306 [2024-10-21 12:04:37.803925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
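[annotation] This target instance is started with -e 0xFFFF, so every tracepoint group is enabled. A minimal sketch of capturing the trace hinted at in the notices above (the shm name nvmf_trace.0 follows from '-i 0' in this run; the spdk_trace binary location is an assumption based on the standard build layout):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > trace_snapshot.txt  # decode live events from the app's shared memory
  cp /dev/shm/nvmf_trace.0 .                                                                                # or keep the raw trace file for offline analysis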
00:20:01.306 [2024-10-21 12:04:37.804655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zqIwOwC1SQ 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zqIwOwC1SQ 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.248 [2024-10-21 12:04:38.694624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.248 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.508 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:02.508 [2024-10-21 12:04:39.043513] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.508 [2024-10-21 12:04:39.043874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.508 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.768 malloc0 00:20:02.768 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.029 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:20:03.029 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1000868 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1000868 /var/tmp/bdevperf.sock 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
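Note: the setup_nvmf_tgt helper traced above reduces to seven RPCs. A condensed sketch using only commands visible in this log (rpc.py stands for the scripts/rpc.py path used throughout; /tmp/tmp.zqIwOwC1SQ is the PSK file created earlier in the test, and -o appears to correspond to the "c2h_success": false setting in the config dumps further down):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0                   # RAM-backed bdev for the namespace
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ           # register the PSK in the keyring
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0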
common/autotest_common.sh@831 -- # '[' -z 1000868 ']' 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.290 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 [2024-10-21 12:04:39.853024] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:03.290 [2024-10-21 12:04:39.853096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000868 ] 00:20:03.550 [2024-10-21 12:04:39.933861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.550 [2024-10-21 12:04:39.969223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.120 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.120 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.120 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:20:04.380 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.380 [2024-10-21 12:04:40.956707] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.641 nvme0n1 00:20:04.641 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.641 Running I/O for 1 seconds... 
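Note: the initiator side is a second SPDK app. bdevperf starts idle (-z) on its own RPC socket, is handed the same PSK, and attaches a controller against the TLS listener before the I/O phase. The host-side sequence from the trace, condensed:

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests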
00:20:05.582 5747.00 IOPS, 22.45 MiB/s 00:20:05.582 Latency(us) 00:20:05.582 [2024-10-21T10:04:42.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.582 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.582 Verification LBA range: start 0x0 length 0x2000 00:20:05.582 nvme0n1 : 1.02 5753.09 22.47 0.00 0.00 22046.73 6717.44 45656.75 00:20:05.582 [2024-10-21T10:04:42.177Z] =================================================================================================================== 00:20:05.582 [2024-10-21T10:04:42.177Z] Total : 5753.09 22.47 0.00 0.00 22046.73 6717.44 45656.75 00:20:05.582 { 00:20:05.582 "results": [ 00:20:05.582 { 00:20:05.582 "job": "nvme0n1", 00:20:05.582 "core_mask": "0x2", 00:20:05.582 "workload": "verify", 00:20:05.582 "status": "finished", 00:20:05.582 "verify_range": { 00:20:05.582 "start": 0, 00:20:05.582 "length": 8192 00:20:05.582 }, 00:20:05.582 "queue_depth": 128, 00:20:05.582 "io_size": 4096, 00:20:05.582 "runtime": 1.021365, 00:20:05.582 "iops": 5753.085331884292, 00:20:05.582 "mibps": 22.472989577673015, 00:20:05.582 "io_failed": 0, 00:20:05.582 "io_timeout": 0, 00:20:05.582 "avg_latency_us": 22046.731617880643, 00:20:05.582 "min_latency_us": 6717.44, 00:20:05.582 "max_latency_us": 45656.746666666666 00:20:05.582 } 00:20:05.582 ], 00:20:05.582 "core_count": 1 00:20:05.582 } 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1000868 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1000868 ']' 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1000868 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1000868 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1000868' 00:20:05.843 killing process with pid 1000868 00:20:05.843 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1000868 00:20:05.843 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.844 00:20:05.844 Latency(us) 00:20:05.844 [2024-10-21T10:04:42.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.844 [2024-10-21T10:04:42.439Z] =================================================================================================================== 00:20:05.844 [2024-10-21T10:04:42.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1000868 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1000457 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1000457 ']' 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1000457 00:20:05.844 12:04:42 
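Note: the per-job JSON above restates the human-readable table. With 4 KiB I/Os the throughput column is simply iops * io_size / 2^20, and the average latency is consistent with Little's law (queue_depth / iops = 128 / 5753 ≈ 22.25 ms, against the reported 22.05 ms). A quick check:

    awk 'BEGIN { print 5753.085331884292 * 4096 / 1048576 }'   # prints ~22.473, matching "mibps"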
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1000457 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1000457' 00:20:05.844 killing process with pid 1000457 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1000457 00:20:05.844 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1000457 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1001345 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1001345 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001345 ']' 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.105 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.105 [2024-10-21 12:04:42.612088] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:06.105 [2024-10-21 12:04:42.612143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.105 [2024-10-21 12:04:42.696142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.366 [2024-10-21 12:04:42.739733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.366 [2024-10-21 12:04:42.739787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:06.366 [2024-10-21 12:04:42.739795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.366 [2024-10-21 12:04:42.739803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.366 [2024-10-21 12:04:42.739809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.366 [2024-10-21 12:04:42.740546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.937 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.938 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.938 [2024-10-21 12:04:43.474477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.938 malloc0 00:20:06.938 [2024-10-21 12:04:43.504608] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.938 [2024-10-21 12:04:43.504954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1001584 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1001584 /var/tmp/bdevperf.sock 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001584 ']' 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.198 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.198 [2024-10-21 12:04:43.595031] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:20:07.198 [2024-10-21 12:04:43.595095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001584 ] 00:20:07.198 [2024-10-21 12:04:43.674682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.198 [2024-10-21 12:04:43.709669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.137 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.137 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:08.137 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqIwOwC1SQ 00:20:08.137 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.137 [2024-10-21 12:04:44.699485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.397 nvme0n1 00:20:08.397 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.397 Running I/O for 1 seconds... 00:20:09.338 5929.00 IOPS, 23.16 MiB/s 00:20:09.338 Latency(us) 00:20:09.338 [2024-10-21T10:04:45.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.338 Verification LBA range: start 0x0 length 0x2000 00:20:09.338 nvme0n1 : 1.01 5984.21 23.38 0.00 0.00 21256.43 4724.05 23811.41 00:20:09.338 [2024-10-21T10:04:45.933Z] =================================================================================================================== 00:20:09.338 [2024-10-21T10:04:45.933Z] Total : 5984.21 23.38 0.00 0.00 21256.43 4724.05 23811.41 00:20:09.338 { 00:20:09.338 "results": [ 00:20:09.338 { 00:20:09.338 "job": "nvme0n1", 00:20:09.338 "core_mask": "0x2", 00:20:09.338 "workload": "verify", 00:20:09.338 "status": "finished", 00:20:09.338 "verify_range": { 00:20:09.338 "start": 0, 00:20:09.338 "length": 8192 00:20:09.338 }, 00:20:09.338 "queue_depth": 128, 00:20:09.338 "io_size": 4096, 00:20:09.338 "runtime": 1.012331, 00:20:09.338 "iops": 5984.208722245985, 00:20:09.338 "mibps": 23.37581532127338, 00:20:09.338 "io_failed": 0, 00:20:09.338 "io_timeout": 0, 00:20:09.338 "avg_latency_us": 21256.43247716518, 00:20:09.338 "min_latency_us": 4724.053333333333, 00:20:09.338 "max_latency_us": 23811.413333333334 00:20:09.338 } 00:20:09.338 ], 00:20:09.338 "core_count": 1 00:20:09.338 } 00:20:09.338 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:09.338 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.338 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.599 12:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:09.599 "subsystems": [ 00:20:09.599 { 00:20:09.599 "subsystem": "keyring", 00:20:09.599 "config": [ 00:20:09.599 { 00:20:09.599 "method": "keyring_file_add_key", 00:20:09.599 "params": { 00:20:09.599 "name": "key0", 00:20:09.599 "path": "/tmp/tmp.zqIwOwC1SQ" 00:20:09.599 } 00:20:09.599 } 00:20:09.599 ] 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "subsystem": "iobuf", 00:20:09.599 "config": [ 00:20:09.599 { 00:20:09.599 "method": "iobuf_set_options", 00:20:09.599 "params": { 00:20:09.599 "small_pool_count": 8192, 00:20:09.599 "large_pool_count": 1024, 00:20:09.599 "small_bufsize": 8192, 00:20:09.599 "large_bufsize": 135168 00:20:09.599 } 00:20:09.599 } 00:20:09.599 ] 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "subsystem": "sock", 00:20:09.599 "config": [ 00:20:09.599 { 00:20:09.599 "method": "sock_set_default_impl", 00:20:09.599 "params": { 00:20:09.599 "impl_name": "posix" 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "sock_impl_set_options", 00:20:09.599 "params": { 00:20:09.599 "impl_name": "ssl", 00:20:09.599 "recv_buf_size": 4096, 00:20:09.599 "send_buf_size": 4096, 00:20:09.599 "enable_recv_pipe": true, 00:20:09.599 "enable_quickack": false, 00:20:09.599 "enable_placement_id": 0, 00:20:09.599 "enable_zerocopy_send_server": true, 00:20:09.599 "enable_zerocopy_send_client": false, 00:20:09.599 "zerocopy_threshold": 0, 00:20:09.599 "tls_version": 0, 00:20:09.599 "enable_ktls": false 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "sock_impl_set_options", 00:20:09.599 "params": { 00:20:09.599 "impl_name": "posix", 00:20:09.599 "recv_buf_size": 2097152, 00:20:09.599 "send_buf_size": 2097152, 00:20:09.599 "enable_recv_pipe": true, 00:20:09.599 "enable_quickack": false, 00:20:09.599 "enable_placement_id": 0, 00:20:09.599 "enable_zerocopy_send_server": true, 00:20:09.599 "enable_zerocopy_send_client": false, 00:20:09.599 "zerocopy_threshold": 0, 00:20:09.599 "tls_version": 0, 00:20:09.599 "enable_ktls": false 00:20:09.599 } 00:20:09.599 } 00:20:09.599 ] 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "subsystem": "vmd", 00:20:09.599 "config": [] 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "subsystem": "accel", 00:20:09.599 "config": [ 00:20:09.599 { 00:20:09.599 "method": "accel_set_options", 00:20:09.599 "params": { 00:20:09.599 "small_cache_size": 128, 00:20:09.599 "large_cache_size": 16, 00:20:09.599 "task_count": 2048, 00:20:09.599 "sequence_count": 2048, 00:20:09.599 "buf_count": 2048 00:20:09.599 } 00:20:09.599 } 00:20:09.599 ] 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "subsystem": "bdev", 00:20:09.599 "config": [ 00:20:09.599 { 00:20:09.599 "method": "bdev_set_options", 00:20:09.599 "params": { 00:20:09.599 "bdev_io_pool_size": 65535, 00:20:09.599 "bdev_io_cache_size": 256, 00:20:09.599 "bdev_auto_examine": true, 00:20:09.599 "iobuf_small_cache_size": 128, 00:20:09.599 "iobuf_large_cache_size": 16 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "bdev_raid_set_options", 00:20:09.599 "params": { 00:20:09.599 "process_window_size_kb": 1024, 00:20:09.599 "process_max_bandwidth_mb_sec": 0 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "bdev_iscsi_set_options", 00:20:09.599 "params": { 00:20:09.599 "timeout_sec": 30 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "bdev_nvme_set_options", 00:20:09.599 "params": { 00:20:09.599 "action_on_timeout": "none", 00:20:09.599 "timeout_us": 0, 00:20:09.599 
"timeout_admin_us": 0, 00:20:09.599 "keep_alive_timeout_ms": 10000, 00:20:09.599 "arbitration_burst": 0, 00:20:09.599 "low_priority_weight": 0, 00:20:09.599 "medium_priority_weight": 0, 00:20:09.599 "high_priority_weight": 0, 00:20:09.599 "nvme_adminq_poll_period_us": 10000, 00:20:09.599 "nvme_ioq_poll_period_us": 0, 00:20:09.599 "io_queue_requests": 0, 00:20:09.599 "delay_cmd_submit": true, 00:20:09.599 "transport_retry_count": 4, 00:20:09.599 "bdev_retry_count": 3, 00:20:09.599 "transport_ack_timeout": 0, 00:20:09.599 "ctrlr_loss_timeout_sec": 0, 00:20:09.599 "reconnect_delay_sec": 0, 00:20:09.599 "fast_io_fail_timeout_sec": 0, 00:20:09.599 "disable_auto_failback": false, 00:20:09.599 "generate_uuids": false, 00:20:09.599 "transport_tos": 0, 00:20:09.599 "nvme_error_stat": false, 00:20:09.599 "rdma_srq_size": 0, 00:20:09.599 "io_path_stat": false, 00:20:09.599 "allow_accel_sequence": false, 00:20:09.599 "rdma_max_cq_size": 0, 00:20:09.599 "rdma_cm_event_timeout_ms": 0, 00:20:09.599 "dhchap_digests": [ 00:20:09.599 "sha256", 00:20:09.599 "sha384", 00:20:09.599 "sha512" 00:20:09.599 ], 00:20:09.599 "dhchap_dhgroups": [ 00:20:09.599 "null", 00:20:09.599 "ffdhe2048", 00:20:09.599 "ffdhe3072", 00:20:09.599 "ffdhe4096", 00:20:09.599 "ffdhe6144", 00:20:09.599 "ffdhe8192" 00:20:09.599 ] 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "method": "bdev_nvme_set_hotplug", 00:20:09.599 "params": { 00:20:09.599 "period_us": 100000, 00:20:09.599 "enable": false 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "bdev_malloc_create", 00:20:09.600 "params": { 00:20:09.600 "name": "malloc0", 00:20:09.600 "num_blocks": 8192, 00:20:09.600 "block_size": 4096, 00:20:09.600 "physical_block_size": 4096, 00:20:09.600 "uuid": "15836dc5-d782-440a-a351-312b95003ebd", 00:20:09.600 "optimal_io_boundary": 0, 00:20:09.600 "md_size": 0, 00:20:09.600 "dif_type": 0, 00:20:09.600 "dif_is_head_of_md": false, 00:20:09.600 "dif_pi_format": 0 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "bdev_wait_for_examine" 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "subsystem": "nbd", 00:20:09.600 "config": [] 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "subsystem": "scheduler", 00:20:09.600 "config": [ 00:20:09.600 { 00:20:09.600 "method": "framework_set_scheduler", 00:20:09.600 "params": { 00:20:09.600 "name": "static" 00:20:09.600 } 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "subsystem": "nvmf", 00:20:09.600 "config": [ 00:20:09.600 { 00:20:09.600 "method": "nvmf_set_config", 00:20:09.600 "params": { 00:20:09.600 "discovery_filter": "match_any", 00:20:09.600 "admin_cmd_passthru": { 00:20:09.600 "identify_ctrlr": false 00:20:09.600 }, 00:20:09.600 "dhchap_digests": [ 00:20:09.600 "sha256", 00:20:09.600 "sha384", 00:20:09.600 "sha512" 00:20:09.600 ], 00:20:09.600 "dhchap_dhgroups": [ 00:20:09.600 "null", 00:20:09.600 "ffdhe2048", 00:20:09.600 "ffdhe3072", 00:20:09.600 "ffdhe4096", 00:20:09.600 "ffdhe6144", 00:20:09.600 "ffdhe8192" 00:20:09.600 ] 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_set_max_subsystems", 00:20:09.600 "params": { 00:20:09.600 "max_subsystems": 1024 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_set_crdt", 00:20:09.600 "params": { 00:20:09.600 "crdt1": 0, 00:20:09.600 "crdt2": 0, 00:20:09.600 "crdt3": 0 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_create_transport", 00:20:09.600 "params": { 00:20:09.600 "trtype": 
"TCP", 00:20:09.600 "max_queue_depth": 128, 00:20:09.600 "max_io_qpairs_per_ctrlr": 127, 00:20:09.600 "in_capsule_data_size": 4096, 00:20:09.600 "max_io_size": 131072, 00:20:09.600 "io_unit_size": 131072, 00:20:09.600 "max_aq_depth": 128, 00:20:09.600 "num_shared_buffers": 511, 00:20:09.600 "buf_cache_size": 4294967295, 00:20:09.600 "dif_insert_or_strip": false, 00:20:09.600 "zcopy": false, 00:20:09.600 "c2h_success": false, 00:20:09.600 "sock_priority": 0, 00:20:09.600 "abort_timeout_sec": 1, 00:20:09.600 "ack_timeout": 0, 00:20:09.600 "data_wr_pool_size": 0 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_create_subsystem", 00:20:09.600 "params": { 00:20:09.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.600 "allow_any_host": false, 00:20:09.600 "serial_number": "00000000000000000000", 00:20:09.600 "model_number": "SPDK bdev Controller", 00:20:09.600 "max_namespaces": 32, 00:20:09.600 "min_cntlid": 1, 00:20:09.600 "max_cntlid": 65519, 00:20:09.600 "ana_reporting": false 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_subsystem_add_host", 00:20:09.600 "params": { 00:20:09.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.600 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.600 "psk": "key0" 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_subsystem_add_ns", 00:20:09.600 "params": { 00:20:09.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.600 "namespace": { 00:20:09.600 "nsid": 1, 00:20:09.600 "bdev_name": "malloc0", 00:20:09.600 "nguid": "15836DC5D782440AA351312B95003EBD", 00:20:09.600 "uuid": "15836dc5-d782-440a-a351-312b95003ebd", 00:20:09.600 "no_auto_visible": false 00:20:09.600 } 00:20:09.600 } 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "method": "nvmf_subsystem_add_listener", 00:20:09.600 "params": { 00:20:09.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.600 "listen_address": { 00:20:09.600 "trtype": "TCP", 00:20:09.600 "adrfam": "IPv4", 00:20:09.600 "traddr": "10.0.0.2", 00:20:09.600 "trsvcid": "4420" 00:20:09.600 }, 00:20:09.600 "secure_channel": false, 00:20:09.600 "sock_impl": "ssl" 00:20:09.600 } 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 }' 00:20:09.600 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.861 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:09.861 "subsystems": [ 00:20:09.861 { 00:20:09.861 "subsystem": "keyring", 00:20:09.861 "config": [ 00:20:09.861 { 00:20:09.861 "method": "keyring_file_add_key", 00:20:09.861 "params": { 00:20:09.861 "name": "key0", 00:20:09.861 "path": "/tmp/tmp.zqIwOwC1SQ" 00:20:09.861 } 00:20:09.861 } 00:20:09.861 ] 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "subsystem": "iobuf", 00:20:09.861 "config": [ 00:20:09.861 { 00:20:09.861 "method": "iobuf_set_options", 00:20:09.861 "params": { 00:20:09.861 "small_pool_count": 8192, 00:20:09.861 "large_pool_count": 1024, 00:20:09.861 "small_bufsize": 8192, 00:20:09.861 "large_bufsize": 135168 00:20:09.861 } 00:20:09.861 } 00:20:09.861 ] 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "subsystem": "sock", 00:20:09.861 "config": [ 00:20:09.861 { 00:20:09.861 "method": "sock_set_default_impl", 00:20:09.861 "params": { 00:20:09.861 "impl_name": "posix" 00:20:09.861 } 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "method": "sock_impl_set_options", 00:20:09.861 "params": { 00:20:09.861 "impl_name": "ssl", 00:20:09.861 
"recv_buf_size": 4096, 00:20:09.861 "send_buf_size": 4096, 00:20:09.861 "enable_recv_pipe": true, 00:20:09.861 "enable_quickack": false, 00:20:09.861 "enable_placement_id": 0, 00:20:09.861 "enable_zerocopy_send_server": true, 00:20:09.861 "enable_zerocopy_send_client": false, 00:20:09.861 "zerocopy_threshold": 0, 00:20:09.861 "tls_version": 0, 00:20:09.861 "enable_ktls": false 00:20:09.861 } 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "method": "sock_impl_set_options", 00:20:09.861 "params": { 00:20:09.861 "impl_name": "posix", 00:20:09.861 "recv_buf_size": 2097152, 00:20:09.861 "send_buf_size": 2097152, 00:20:09.861 "enable_recv_pipe": true, 00:20:09.861 "enable_quickack": false, 00:20:09.861 "enable_placement_id": 0, 00:20:09.861 "enable_zerocopy_send_server": true, 00:20:09.861 "enable_zerocopy_send_client": false, 00:20:09.861 "zerocopy_threshold": 0, 00:20:09.861 "tls_version": 0, 00:20:09.861 "enable_ktls": false 00:20:09.861 } 00:20:09.861 } 00:20:09.861 ] 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "subsystem": "vmd", 00:20:09.861 "config": [] 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "subsystem": "accel", 00:20:09.861 "config": [ 00:20:09.861 { 00:20:09.861 "method": "accel_set_options", 00:20:09.861 "params": { 00:20:09.861 "small_cache_size": 128, 00:20:09.861 "large_cache_size": 16, 00:20:09.861 "task_count": 2048, 00:20:09.861 "sequence_count": 2048, 00:20:09.861 "buf_count": 2048 00:20:09.861 } 00:20:09.861 } 00:20:09.861 ] 00:20:09.861 }, 00:20:09.861 { 00:20:09.861 "subsystem": "bdev", 00:20:09.861 "config": [ 00:20:09.861 { 00:20:09.861 "method": "bdev_set_options", 00:20:09.861 "params": { 00:20:09.861 "bdev_io_pool_size": 65535, 00:20:09.861 "bdev_io_cache_size": 256, 00:20:09.861 "bdev_auto_examine": true, 00:20:09.862 "iobuf_small_cache_size": 128, 00:20:09.862 "iobuf_large_cache_size": 16 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_raid_set_options", 00:20:09.862 "params": { 00:20:09.862 "process_window_size_kb": 1024, 00:20:09.862 "process_max_bandwidth_mb_sec": 0 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_iscsi_set_options", 00:20:09.862 "params": { 00:20:09.862 "timeout_sec": 30 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_nvme_set_options", 00:20:09.862 "params": { 00:20:09.862 "action_on_timeout": "none", 00:20:09.862 "timeout_us": 0, 00:20:09.862 "timeout_admin_us": 0, 00:20:09.862 "keep_alive_timeout_ms": 10000, 00:20:09.862 "arbitration_burst": 0, 00:20:09.862 "low_priority_weight": 0, 00:20:09.862 "medium_priority_weight": 0, 00:20:09.862 "high_priority_weight": 0, 00:20:09.862 "nvme_adminq_poll_period_us": 10000, 00:20:09.862 "nvme_ioq_poll_period_us": 0, 00:20:09.862 "io_queue_requests": 512, 00:20:09.862 "delay_cmd_submit": true, 00:20:09.862 "transport_retry_count": 4, 00:20:09.862 "bdev_retry_count": 3, 00:20:09.862 "transport_ack_timeout": 0, 00:20:09.862 "ctrlr_loss_timeout_sec": 0, 00:20:09.862 "reconnect_delay_sec": 0, 00:20:09.862 "fast_io_fail_timeout_sec": 0, 00:20:09.862 "disable_auto_failback": false, 00:20:09.862 "generate_uuids": false, 00:20:09.862 "transport_tos": 0, 00:20:09.862 "nvme_error_stat": false, 00:20:09.862 "rdma_srq_size": 0, 00:20:09.862 "io_path_stat": false, 00:20:09.862 "allow_accel_sequence": false, 00:20:09.862 "rdma_max_cq_size": 0, 00:20:09.862 "rdma_cm_event_timeout_ms": 0, 00:20:09.862 "dhchap_digests": [ 00:20:09.862 "sha256", 00:20:09.862 "sha384", 00:20:09.862 "sha512" 00:20:09.862 ], 00:20:09.862 "dhchap_dhgroups": [ 
00:20:09.862 "null", 00:20:09.862 "ffdhe2048", 00:20:09.862 "ffdhe3072", 00:20:09.862 "ffdhe4096", 00:20:09.862 "ffdhe6144", 00:20:09.862 "ffdhe8192" 00:20:09.862 ] 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_nvme_attach_controller", 00:20:09.862 "params": { 00:20:09.862 "name": "nvme0", 00:20:09.862 "trtype": "TCP", 00:20:09.862 "adrfam": "IPv4", 00:20:09.862 "traddr": "10.0.0.2", 00:20:09.862 "trsvcid": "4420", 00:20:09.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.862 "prchk_reftag": false, 00:20:09.862 "prchk_guard": false, 00:20:09.862 "ctrlr_loss_timeout_sec": 0, 00:20:09.862 "reconnect_delay_sec": 0, 00:20:09.862 "fast_io_fail_timeout_sec": 0, 00:20:09.862 "psk": "key0", 00:20:09.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.862 "hdgst": false, 00:20:09.862 "ddgst": false, 00:20:09.862 "multipath": "multipath" 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_nvme_set_hotplug", 00:20:09.862 "params": { 00:20:09.862 "period_us": 100000, 00:20:09.862 "enable": false 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_enable_histogram", 00:20:09.862 "params": { 00:20:09.862 "name": "nvme0n1", 00:20:09.862 "enable": true 00:20:09.862 } 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "method": "bdev_wait_for_examine" 00:20:09.862 } 00:20:09.862 ] 00:20:09.862 }, 00:20:09.862 { 00:20:09.862 "subsystem": "nbd", 00:20:09.862 "config": [] 00:20:09.862 } 00:20:09.862 ] 00:20:09.862 }' 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1001584 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001584 ']' 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001584 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001584 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001584' 00:20:09.862 killing process with pid 1001584 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001584 00:20:09.862 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.862 00:20:09.862 Latency(us) 00:20:09.862 [2024-10-21T10:04:46.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.862 [2024-10-21T10:04:46.457Z] =================================================================================================================== 00:20:09.862 [2024-10-21T10:04:46.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001584 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1001345 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001345 ']' 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1001345 00:20:09.862 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001345 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001345' 00:20:10.123 killing process with pid 1001345 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001345 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001345 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.123 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:10.123 "subsystems": [ 00:20:10.124 { 00:20:10.124 "subsystem": "keyring", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "keyring_file_add_key", 00:20:10.124 "params": { 00:20:10.124 "name": "key0", 00:20:10.124 "path": "/tmp/tmp.zqIwOwC1SQ" 00:20:10.124 } 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "iobuf", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "iobuf_set_options", 00:20:10.124 "params": { 00:20:10.124 "small_pool_count": 8192, 00:20:10.124 "large_pool_count": 1024, 00:20:10.124 "small_bufsize": 8192, 00:20:10.124 "large_bufsize": 135168 00:20:10.124 } 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "sock", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "sock_set_default_impl", 00:20:10.124 "params": { 00:20:10.124 "impl_name": "posix" 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "sock_impl_set_options", 00:20:10.124 "params": { 00:20:10.124 "impl_name": "ssl", 00:20:10.124 "recv_buf_size": 4096, 00:20:10.124 "send_buf_size": 4096, 00:20:10.124 "enable_recv_pipe": true, 00:20:10.124 "enable_quickack": false, 00:20:10.124 "enable_placement_id": 0, 00:20:10.124 "enable_zerocopy_send_server": true, 00:20:10.124 "enable_zerocopy_send_client": false, 00:20:10.124 "zerocopy_threshold": 0, 00:20:10.124 "tls_version": 0, 00:20:10.124 "enable_ktls": false 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "sock_impl_set_options", 00:20:10.124 "params": { 00:20:10.124 "impl_name": "posix", 00:20:10.124 "recv_buf_size": 2097152, 00:20:10.124 "send_buf_size": 2097152, 00:20:10.124 "enable_recv_pipe": true, 00:20:10.124 "enable_quickack": false, 00:20:10.124 "enable_placement_id": 0, 00:20:10.124 "enable_zerocopy_send_server": true, 00:20:10.124 "enable_zerocopy_send_client": false, 00:20:10.124 "zerocopy_threshold": 0, 00:20:10.124 "tls_version": 0, 00:20:10.124 "enable_ktls": false 00:20:10.124 } 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 
"subsystem": "vmd", 00:20:10.124 "config": [] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "accel", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "accel_set_options", 00:20:10.124 "params": { 00:20:10.124 "small_cache_size": 128, 00:20:10.124 "large_cache_size": 16, 00:20:10.124 "task_count": 2048, 00:20:10.124 "sequence_count": 2048, 00:20:10.124 "buf_count": 2048 00:20:10.124 } 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "bdev", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "bdev_set_options", 00:20:10.124 "params": { 00:20:10.124 "bdev_io_pool_size": 65535, 00:20:10.124 "bdev_io_cache_size": 256, 00:20:10.124 "bdev_auto_examine": true, 00:20:10.124 "iobuf_small_cache_size": 128, 00:20:10.124 "iobuf_large_cache_size": 16 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_raid_set_options", 00:20:10.124 "params": { 00:20:10.124 "process_window_size_kb": 1024, 00:20:10.124 "process_max_bandwidth_mb_sec": 0 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_iscsi_set_options", 00:20:10.124 "params": { 00:20:10.124 "timeout_sec": 30 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_nvme_set_options", 00:20:10.124 "params": { 00:20:10.124 "action_on_timeout": "none", 00:20:10.124 "timeout_us": 0, 00:20:10.124 "timeout_admin_us": 0, 00:20:10.124 "keep_alive_timeout_ms": 10000, 00:20:10.124 "arbitration_burst": 0, 00:20:10.124 "low_priority_weight": 0, 00:20:10.124 "medium_priority_weight": 0, 00:20:10.124 "high_priority_weight": 0, 00:20:10.124 "nvme_adminq_poll_period_us": 10000, 00:20:10.124 "nvme_ioq_poll_period_us": 0, 00:20:10.124 "io_queue_requests": 0, 00:20:10.124 "delay_cmd_submit": true, 00:20:10.124 "transport_retry_count": 4, 00:20:10.124 "bdev_retry_count": 3, 00:20:10.124 "transport_ack_timeout": 0, 00:20:10.124 "ctrlr_loss_timeout_sec": 0, 00:20:10.124 "reconnect_delay_sec": 0, 00:20:10.124 "fast_io_fail_timeout_sec": 0, 00:20:10.124 "disable_auto_failback": false, 00:20:10.124 "generate_uuids": false, 00:20:10.124 "transport_tos": 0, 00:20:10.124 "nvme_error_stat": false, 00:20:10.124 "rdma_srq_size": 0, 00:20:10.124 "io_path_stat": false, 00:20:10.124 "allow_accel_sequence": false, 00:20:10.124 "rdma_max_cq_size": 0, 00:20:10.124 "rdma_cm_event_timeout_ms": 0, 00:20:10.124 "dhchap_digests": [ 00:20:10.124 "sha256", 00:20:10.124 "sha384", 00:20:10.124 "sha512" 00:20:10.124 ], 00:20:10.124 "dhchap_dhgroups": [ 00:20:10.124 "null", 00:20:10.124 "ffdhe2048", 00:20:10.124 "ffdhe3072", 00:20:10.124 "ffdhe4096", 00:20:10.124 "ffdhe6144", 00:20:10.124 "ffdhe8192" 00:20:10.124 ] 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_nvme_set_hotplug", 00:20:10.124 "params": { 00:20:10.124 "period_us": 100000, 00:20:10.124 "enable": false 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_malloc_create", 00:20:10.124 "params": { 00:20:10.124 "name": "malloc0", 00:20:10.124 "num_blocks": 8192, 00:20:10.124 "block_size": 4096, 00:20:10.124 "physical_block_size": 4096, 00:20:10.124 "uuid": "15836dc5-d782-440a-a351-312b95003ebd", 00:20:10.124 "optimal_io_boundary": 0, 00:20:10.124 "md_size": 0, 00:20:10.124 "dif_type": 0, 00:20:10.124 "dif_is_head_of_md": false, 00:20:10.124 "dif_pi_format": 0 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "bdev_wait_for_examine" 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "nbd", 
00:20:10.124 "config": [] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "scheduler", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "framework_set_scheduler", 00:20:10.124 "params": { 00:20:10.124 "name": "static" 00:20:10.124 } 00:20:10.124 } 00:20:10.124 ] 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "subsystem": "nvmf", 00:20:10.124 "config": [ 00:20:10.124 { 00:20:10.124 "method": "nvmf_set_config", 00:20:10.124 "params": { 00:20:10.124 "discovery_filter": "match_any", 00:20:10.124 "admin_cmd_passthru": { 00:20:10.124 "identify_ctrlr": false 00:20:10.124 }, 00:20:10.124 "dhchap_digests": [ 00:20:10.124 "sha256", 00:20:10.124 "sha384", 00:20:10.124 "sha512" 00:20:10.124 ], 00:20:10.124 "dhchap_dhgroups": [ 00:20:10.124 "null", 00:20:10.124 "ffdhe2048", 00:20:10.124 "ffdhe3072", 00:20:10.124 "ffdhe4096", 00:20:10.124 "ffdhe6144", 00:20:10.124 "ffdhe8192" 00:20:10.124 ] 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "nvmf_set_max_subsystems", 00:20:10.124 "params": { 00:20:10.124 "max_subsystems": 1024 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "nvmf_set_crdt", 00:20:10.124 "params": { 00:20:10.124 "crdt1": 0, 00:20:10.124 "crdt2": 0, 00:20:10.124 "crdt3": 0 00:20:10.124 } 00:20:10.124 }, 00:20:10.124 { 00:20:10.124 "method": "nvmf_create_transport", 00:20:10.124 "params": { 00:20:10.124 "trtype": "TCP", 00:20:10.124 "max_queue_depth": 128, 00:20:10.124 "max_io_qpairs_per_ctrlr": 127, 00:20:10.124 "in_capsule_data_size": 4096, 00:20:10.124 "max_io_size": 131072, 00:20:10.124 "io_unit_size": 131072, 00:20:10.124 "max_aq_depth": 128, 00:20:10.124 "num_shared_buffers": 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.124 511, 00:20:10.125 "buf_cache_size": 4294967295, 00:20:10.125 "dif_insert_or_strip": false, 00:20:10.125 "zcopy": false, 00:20:10.125 "c2h_success": false, 00:20:10.125 "sock_priority": 0, 00:20:10.125 "abort_timeout_sec": 1, 00:20:10.125 "ack_timeout": 0, 00:20:10.125 "data_wr_pool_size": 0 00:20:10.125 } 00:20:10.125 }, 00:20:10.125 { 00:20:10.125 "method": "nvmf_create_subsystem", 00:20:10.125 "params": { 00:20:10.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.125 "allow_any_host": false, 00:20:10.125 "serial_number": "00000000000000000000", 00:20:10.125 "model_number": "SPDK bdev Controller", 00:20:10.125 "max_namespaces": 32, 00:20:10.125 "min_cntlid": 1, 00:20:10.125 "max_cntlid": 65519, 00:20:10.125 "ana_reporting": false 00:20:10.125 } 00:20:10.125 }, 00:20:10.125 { 00:20:10.125 "method": "nvmf_subsystem_add_host", 00:20:10.125 "params": { 00:20:10.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.125 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.125 "psk": "key0" 00:20:10.125 } 00:20:10.125 }, 00:20:10.125 { 00:20:10.125 "method": "nvmf_subsystem_add_ns", 00:20:10.125 "params": { 00:20:10.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.125 "namespace": { 00:20:10.125 "nsid": 1, 00:20:10.125 "bdev_name": "malloc0", 00:20:10.125 "nguid": "15836DC5D782440AA351312B95003EBD", 00:20:10.125 "uuid": "15836dc5-d782-440a-a351-312b95003ebd", 00:20:10.125 "no_auto_visible": false 00:20:10.125 } 00:20:10.125 } 00:20:10.125 }, 00:20:10.125 { 00:20:10.125 "method": "nvmf_subsystem_add_listener", 00:20:10.125 "params": { 00:20:10.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.125 "listen_address": { 00:20:10.125 "trtype": "TCP", 00:20:10.125 "adrfam": "IPv4", 00:20:10.125 "traddr": "10.0.0.2", 00:20:10.125 "trsvcid": "4420" 00:20:10.125 }, 00:20:10.125 
"secure_channel": false, 00:20:10.125 "sock_impl": "ssl" 00:20:10.125 } 00:20:10.125 } 00:20:10.125 ] 00:20:10.125 } 00:20:10.125 ] 00:20:10.125 }' 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1002265 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1002265 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1002265 ']' 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.125 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.125 [2024-10-21 12:04:46.686696] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:10.125 [2024-10-21 12:04:46.686755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.385 [2024-10-21 12:04:46.770758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.385 [2024-10-21 12:04:46.800139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.385 [2024-10-21 12:04:46.800167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.385 [2024-10-21 12:04:46.800172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.385 [2024-10-21 12:04:46.800177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.385 [2024-10-21 12:04:46.800184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.385 [2024-10-21 12:04:46.800677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.645 [2024-10-21 12:04:46.993333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.645 [2024-10-21 12:04:47.025370] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.645 [2024-10-21 12:04:47.025578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.905 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.905 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.905 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:10.905 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.905 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1002303 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1002303 /var/tmp/bdevperf.sock 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1002303 ']' 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.166 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:11.166 "subsystems": [ 00:20:11.166 { 00:20:11.166 "subsystem": "keyring", 00:20:11.166 "config": [ 00:20:11.166 { 00:20:11.166 "method": "keyring_file_add_key", 00:20:11.166 "params": { 00:20:11.166 "name": "key0", 00:20:11.166 "path": "/tmp/tmp.zqIwOwC1SQ" 00:20:11.166 } 00:20:11.166 } 00:20:11.166 ] 00:20:11.166 }, 00:20:11.166 { 00:20:11.166 "subsystem": "iobuf", 00:20:11.166 "config": [ 00:20:11.166 { 00:20:11.166 "method": "iobuf_set_options", 00:20:11.166 "params": { 00:20:11.166 "small_pool_count": 8192, 00:20:11.166 "large_pool_count": 1024, 00:20:11.166 "small_bufsize": 8192, 00:20:11.166 "large_bufsize": 135168 00:20:11.166 } 00:20:11.166 } 00:20:11.166 ] 00:20:11.166 }, 00:20:11.166 { 00:20:11.166 "subsystem": "sock", 00:20:11.166 "config": [ 00:20:11.166 { 00:20:11.166 "method": "sock_set_default_impl", 00:20:11.167 "params": { 00:20:11.167 "impl_name": "posix" 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "sock_impl_set_options", 00:20:11.167 "params": { 00:20:11.167 "impl_name": "ssl", 00:20:11.167 "recv_buf_size": 4096, 00:20:11.167 "send_buf_size": 4096, 00:20:11.167 "enable_recv_pipe": true, 00:20:11.167 "enable_quickack": false, 00:20:11.167 "enable_placement_id": 0, 00:20:11.167 "enable_zerocopy_send_server": true, 00:20:11.167 "enable_zerocopy_send_client": false, 00:20:11.167 "zerocopy_threshold": 0, 00:20:11.167 "tls_version": 0, 00:20:11.167 "enable_ktls": false 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "sock_impl_set_options", 00:20:11.167 "params": { 00:20:11.167 "impl_name": "posix", 00:20:11.167 "recv_buf_size": 2097152, 00:20:11.167 "send_buf_size": 2097152, 00:20:11.167 "enable_recv_pipe": true, 00:20:11.167 "enable_quickack": false, 00:20:11.167 "enable_placement_id": 0, 00:20:11.167 "enable_zerocopy_send_server": true, 00:20:11.167 "enable_zerocopy_send_client": false, 00:20:11.167 "zerocopy_threshold": 0, 00:20:11.167 "tls_version": 0, 00:20:11.167 "enable_ktls": false 00:20:11.167 } 00:20:11.167 } 00:20:11.167 ] 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "subsystem": "vmd", 00:20:11.167 "config": [] 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "subsystem": "accel", 00:20:11.167 "config": [ 00:20:11.167 { 00:20:11.167 "method": "accel_set_options", 00:20:11.167 "params": { 00:20:11.167 "small_cache_size": 128, 00:20:11.167 "large_cache_size": 16, 00:20:11.167 "task_count": 2048, 00:20:11.167 "sequence_count": 2048, 00:20:11.167 "buf_count": 2048 00:20:11.167 } 00:20:11.167 } 00:20:11.167 ] 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "subsystem": "bdev", 00:20:11.167 "config": [ 00:20:11.167 { 00:20:11.167 "method": "bdev_set_options", 00:20:11.167 "params": { 00:20:11.167 "bdev_io_pool_size": 65535, 00:20:11.167 "bdev_io_cache_size": 256, 00:20:11.167 "bdev_auto_examine": true, 00:20:11.167 "iobuf_small_cache_size": 128, 00:20:11.167 "iobuf_large_cache_size": 16 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_raid_set_options", 00:20:11.167 
"params": { 00:20:11.167 "process_window_size_kb": 1024, 00:20:11.167 "process_max_bandwidth_mb_sec": 0 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_iscsi_set_options", 00:20:11.167 "params": { 00:20:11.167 "timeout_sec": 30 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_nvme_set_options", 00:20:11.167 "params": { 00:20:11.167 "action_on_timeout": "none", 00:20:11.167 "timeout_us": 0, 00:20:11.167 "timeout_admin_us": 0, 00:20:11.167 "keep_alive_timeout_ms": 10000, 00:20:11.167 "arbitration_burst": 0, 00:20:11.167 "low_priority_weight": 0, 00:20:11.167 "medium_priority_weight": 0, 00:20:11.167 "high_priority_weight": 0, 00:20:11.167 "nvme_adminq_poll_period_us": 10000, 00:20:11.167 "nvme_ioq_poll_period_us": 0, 00:20:11.167 "io_queue_requests": 512, 00:20:11.167 "delay_cmd_submit": true, 00:20:11.167 "transport_retry_count": 4, 00:20:11.167 "bdev_retry_count": 3, 00:20:11.167 "transport_ack_timeout": 0, 00:20:11.167 "ctrlr_loss_timeout_sec": 0, 00:20:11.167 "reconnect_delay_sec": 0, 00:20:11.167 "fast_io_fail_timeout_sec": 0, 00:20:11.167 "disable_auto_failback": false, 00:20:11.167 "generate_uuids": false, 00:20:11.167 "transport_tos": 0, 00:20:11.167 "nvme_error_stat": false, 00:20:11.167 "rdma_srq_size": 0, 00:20:11.167 "io_path_stat": false, 00:20:11.167 "allow_accel_sequence": false, 00:20:11.167 "rdma_max_cq_size": 0, 00:20:11.167 "rdma_cm_event_timeout_ms": 0, 00:20:11.167 "dhchap_digests": [ 00:20:11.167 "sha256", 00:20:11.167 "sha384", 00:20:11.167 "sha512" 00:20:11.167 ], 00:20:11.167 "dhchap_dhgroups": [ 00:20:11.167 "null", 00:20:11.167 "ffdhe2048", 00:20:11.167 "ffdhe3072", 00:20:11.167 "ffdhe4096", 00:20:11.167 "ffdhe6144", 00:20:11.167 "ffdhe8192" 00:20:11.167 ] 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_nvme_attach_controller", 00:20:11.167 "params": { 00:20:11.167 "name": "nvme0", 00:20:11.167 "trtype": "TCP", 00:20:11.167 "adrfam": "IPv4", 00:20:11.167 "traddr": "10.0.0.2", 00:20:11.167 "trsvcid": "4420", 00:20:11.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.167 "prchk_reftag": false, 00:20:11.167 "prchk_guard": false, 00:20:11.167 "ctrlr_loss_timeout_sec": 0, 00:20:11.167 "reconnect_delay_sec": 0, 00:20:11.167 "fast_io_fail_timeout_sec": 0, 00:20:11.167 "psk": "key0", 00:20:11.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.167 "hdgst": false, 00:20:11.167 "ddgst": false, 00:20:11.167 "multipath": "multipath" 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_nvme_set_hotplug", 00:20:11.167 "params": { 00:20:11.167 "period_us": 100000, 00:20:11.167 "enable": false 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_enable_histogram", 00:20:11.167 "params": { 00:20:11.167 "name": "nvme0n1", 00:20:11.167 "enable": true 00:20:11.167 } 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "method": "bdev_wait_for_examine" 00:20:11.167 } 00:20:11.167 ] 00:20:11.167 }, 00:20:11.167 { 00:20:11.167 "subsystem": "nbd", 00:20:11.167 "config": [] 00:20:11.167 } 00:20:11.167 ] 00:20:11.167 }' 00:20:11.167 [2024-10-21 12:04:47.573093] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:20:11.167 [2024-10-21 12:04:47.573149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002303 ] 00:20:11.167 [2024-10-21 12:04:47.646984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.167 [2024-10-21 12:04:47.676823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.428 [2024-10-21 12:04:47.811235] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.998 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.268 Running I/O for 1 seconds... 00:20:13.398 6316.00 IOPS, 24.67 MiB/s 00:20:13.398 Latency(us) 00:20:13.398 [2024-10-21T10:04:49.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.398 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.398 Verification LBA range: start 0x0 length 0x2000 00:20:13.398 nvme0n1 : 1.02 6337.56 24.76 0.00 0.00 20038.19 5242.88 21408.43 00:20:13.398 [2024-10-21T10:04:49.993Z] =================================================================================================================== 00:20:13.398 [2024-10-21T10:04:49.993Z] Total : 6337.56 24.76 0.00 0.00 20038.19 5242.88 21408.43 00:20:13.398 { 00:20:13.398 "results": [ 00:20:13.398 { 00:20:13.398 "job": "nvme0n1", 00:20:13.398 "core_mask": "0x2", 00:20:13.398 "workload": "verify", 00:20:13.398 "status": "finished", 00:20:13.398 "verify_range": { 00:20:13.398 "start": 0, 00:20:13.398 "length": 8192 00:20:13.398 }, 00:20:13.398 "queue_depth": 128, 00:20:13.398 "io_size": 4096, 00:20:13.398 "runtime": 1.016795, 00:20:13.398 "iops": 6337.560668571345, 00:20:13.399 "mibps": 24.756096361606815, 00:20:13.399 "io_failed": 0, 00:20:13.399 "io_timeout": 0, 00:20:13.399 "avg_latency_us": 20038.1883881647, 00:20:13.399 "min_latency_us": 5242.88, 00:20:13.399 "max_latency_us": 21408.426666666666 00:20:13.399 } 00:20:13.399 ], 00:20:13.399 "core_count": 1 00:20:13.399 } 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
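The JSON result block a few entries up is self-consistent and easy to sanity-check by hand: with 4096-byte I/Os, MiB/s is IOPS * 4096 / 2^20, i.e. IOPS / 256, and 6337.56 / 256 = 24.756, which matches the reported "mibps". A throwaway check of the two derived fields (plain bc arithmetic, nothing SPDK-specific):

  # MiB/s from IOPS at a fixed 4 KiB I/O size: IOPS / 256
  echo '6337.560668571345 / 256' | bc -l        # 24.7560..., matches "mibps"
  # Total I/Os completed in the run: IOPS * runtime
  echo '6337.560668571345 * 1.016795' | bc -l   # ~6444 I/Os over ~1.02 s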
00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.399 nvmf_trace.0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1002303 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1002303 ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1002303 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1002303 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1002303' 00:20:13.399 killing process with pid 1002303 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1002303 00:20:13.399 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.399 00:20:13.399 Latency(us) 00:20:13.399 [2024-10-21T10:04:49.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.399 [2024-10-21T10:04:49.994Z] =================================================================================================================== 00:20:13.399 [2024-10-21T10:04:49.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1002303 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.399 rmmod nvme_tcp 00:20:13.399 rmmod nvme_fabrics 00:20:13.399 rmmod nvme_keyring 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.399 12:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1002265 ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1002265 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1002265 ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1002265 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.399 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1002265 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1002265' 00:20:13.660 killing process with pid 1002265 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1002265 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1002265 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.660 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IhNJs6vtw1 /tmp/tmp.5H731Fa02E /tmp/tmp.zqIwOwC1SQ 00:20:16.228 00:20:16.228 real 1m26.861s 00:20:16.228 user 2m17.494s 00:20:16.228 sys 0m26.909s 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 ************************************ 00:20:16.228 END TEST nvmf_tls 
00:20:16.228 ************************************ 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 ************************************ 00:20:16.228 START TEST nvmf_fips 00:20:16.228 ************************************ 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.228 * Looking for test storage... 00:20:16.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.228 --rc genhtml_branch_coverage=1 00:20:16.228 --rc genhtml_function_coverage=1 00:20:16.228 --rc genhtml_legend=1 00:20:16.228 --rc geninfo_all_blocks=1 00:20:16.228 --rc geninfo_unexecuted_blocks=1 00:20:16.228 00:20:16.228 ' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.228 --rc genhtml_branch_coverage=1 00:20:16.228 --rc genhtml_function_coverage=1 00:20:16.228 --rc genhtml_legend=1 00:20:16.228 --rc geninfo_all_blocks=1 00:20:16.228 --rc geninfo_unexecuted_blocks=1 00:20:16.228 00:20:16.228 ' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.228 --rc genhtml_branch_coverage=1 00:20:16.228 --rc genhtml_function_coverage=1 00:20:16.228 --rc genhtml_legend=1 00:20:16.228 --rc geninfo_all_blocks=1 00:20:16.228 --rc geninfo_unexecuted_blocks=1 00:20:16.228 00:20:16.228 ' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.228 --rc genhtml_branch_coverage=1 00:20:16.228 --rc genhtml_function_coverage=1 00:20:16.228 --rc genhtml_legend=1 00:20:16.228 --rc geninfo_all_blocks=1 00:20:16.228 --rc geninfo_unexecuted_blocks=1 00:20:16.228 00:20:16.228 ' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.228 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:16.229 12:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:16.229 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:16.230 Error setting digest 00:20:16.230 40F2B713C87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:16.230 40F2B713C87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:16.230 
12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.230 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.377 12:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:24.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:24.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:24.377 12:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:24.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:24.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.377 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:24.378 12:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.378 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:24.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:20:24.378 00:20:24.378 --- 10.0.0.2 ping statistics --- 00:20:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.378 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:20:24.378 00:20:24.378 --- 10.0.0.1 ping statistics --- 00:20:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.378 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1007093 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1007093 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1007093 ']' 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.378 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.378 [2024-10-21 12:05:00.316853] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
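Everything from the PCI scan down to the two pings is nvmftestinit building the point-to-point topology the FIPS test runs over: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 between them. Condensed from the commands in the log above (a sketch, not the full nvmf/common.sh logic; error handling and the cleanup traps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # target reachable?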
00:20:24.378 [2024-10-21 12:05:00.316927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.378 [2024-10-21 12:05:00.406814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.378 [2024-10-21 12:05:00.457917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.378 [2024-10-21 12:05:00.457967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.378 [2024-10-21 12:05:00.457976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.378 [2024-10-21 12:05:00.457989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.378 [2024-10-21 12:05:00.457995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.378 [2024-10-21 12:05:00.458762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.cA0 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.cA0 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.cA0 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.cA0 00:20:24.639 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.900 [2024-10-21 12:05:01.328776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.900 [2024-10-21 12:05:01.344783] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.900 [2024-10-21 12:05:01.345077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.900 malloc0 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.900 12:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1007356 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1007356 /var/tmp/bdevperf.sock 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1007356 ']' 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.900 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.900 [2024-10-21 12:05:01.492030] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:24.900 [2024-10-21 12:05:01.492109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007356 ] 00:20:25.161 [2024-10-21 12:05:01.574588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.161 [2024-10-21 12:05:01.625714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.733 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.733 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:25.733 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.cA0 00:20:25.995 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.256 [2024-10-21 12:05:02.642482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.256 TLSTESTn1 00:20:26.256 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.256 Running I/O for 10 seconds... 
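While the 10-second run ticks over, it is worth noting how the PSK used here was produced a few entries up: the test writes a key in the NVMe-oF TLS PSK interchange format to a mktemp file and tightens its permissions before registering it on both ends under the name key0. A sketch of that preparation step (the key string is the test's own fixed vector from the log; the 0600 chmod mirrors what the harness does before handing the file to keyring_file_add_key):

  # NVMe-oF PSK interchange format: "NVMeTLSkey-1:<hash id>:<base64 key>:"
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  KEY_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n "$KEY" > "$KEY_PATH"     # no trailing newline, as in the test
  chmod 0600 "$KEY_PATH"           # keep the key private to the owner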
00:20:28.581 4599.00 IOPS, 17.96 MiB/s [2024-10-21T10:05:06.119Z] 4839.00 IOPS, 18.90 MiB/s [2024-10-21T10:05:07.060Z] 5357.00 IOPS, 20.93 MiB/s [2024-10-21T10:05:08.002Z] 5532.75 IOPS, 21.61 MiB/s [2024-10-21T10:05:08.943Z] 5404.00 IOPS, 21.11 MiB/s [2024-10-21T10:05:09.882Z] 5423.17 IOPS, 21.18 MiB/s [2024-10-21T10:05:11.264Z] 5514.14 IOPS, 21.54 MiB/s [2024-10-21T10:05:12.205Z] 5492.38 IOPS, 21.45 MiB/s [2024-10-21T10:05:13.146Z] 5436.89 IOPS, 21.24 MiB/s [2024-10-21T10:05:13.146Z] 5420.80 IOPS, 21.18 MiB/s 00:20:36.551 Latency(us) 00:20:36.551 [2024-10-21T10:05:13.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.551 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.551 Verification LBA range: start 0x0 length 0x2000 00:20:36.551 TLSTESTn1 : 10.01 5425.68 21.19 0.00 0.00 23554.86 5870.93 62040.75 00:20:36.551 [2024-10-21T10:05:13.146Z] =================================================================================================================== 00:20:36.551 [2024-10-21T10:05:13.146Z] Total : 5425.68 21.19 0.00 0.00 23554.86 5870.93 62040.75 00:20:36.551 { 00:20:36.551 "results": [ 00:20:36.551 { 00:20:36.551 "job": "TLSTESTn1", 00:20:36.551 "core_mask": "0x4", 00:20:36.551 "workload": "verify", 00:20:36.551 "status": "finished", 00:20:36.551 "verify_range": { 00:20:36.551 "start": 0, 00:20:36.551 "length": 8192 00:20:36.551 }, 00:20:36.551 "queue_depth": 128, 00:20:36.551 "io_size": 4096, 00:20:36.551 "runtime": 10.014404, 00:20:36.551 "iops": 5425.684843551348, 00:20:36.551 "mibps": 21.194081420122455, 00:20:36.551 "io_failed": 0, 00:20:36.551 "io_timeout": 0, 00:20:36.551 "avg_latency_us": 23554.855301861906, 00:20:36.551 "min_latency_us": 5870.933333333333, 00:20:36.551 "max_latency_us": 62040.746666666666 00:20:36.551 } 00:20:36.551 ], 00:20:36.551 "core_count": 1 00:20:36.551 } 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:36.551 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:36.551 nvmf_trace.0 00:20:36.551 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:36.551 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1007356 00:20:36.551 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1007356 ']' 00:20:36.551 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1007356 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007356 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007356' 00:20:36.552 killing process with pid 1007356 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1007356 00:20:36.552 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.552 00:20:36.552 Latency(us) 00:20:36.552 [2024-10-21T10:05:13.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.552 [2024-10-21T10:05:13.147Z] =================================================================================================================== 00:20:36.552 [2024-10-21T10:05:13.147Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.552 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1007356 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.812 rmmod nvme_tcp 00:20:36.812 rmmod nvme_fabrics 00:20:36.812 rmmod nvme_keyring 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1007093 ']' 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1007093 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1007093 ']' 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1007093 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007093 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:36.812 12:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007093' 00:20:36.812 killing process with pid 1007093 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1007093 00:20:36.812 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1007093 00:20:37.072 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:37.072 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.073 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.cA0 00:20:38.985 00:20:38.985 real 0m23.198s 00:20:38.985 user 0m24.792s 00:20:38.985 sys 0m9.759s 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.985 ************************************ 00:20:38.985 END TEST nvmf_fips 00:20:38.985 ************************************ 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:38.985 12:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.246 ************************************ 00:20:39.246 START TEST nvmf_control_msg_list 00:20:39.246 ************************************ 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:39.246 * Looking for test storage... 
00:20:39.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.246 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.247 --rc genhtml_branch_coverage=1 00:20:39.247 --rc genhtml_function_coverage=1 00:20:39.247 --rc genhtml_legend=1 00:20:39.247 --rc geninfo_all_blocks=1 00:20:39.247 --rc geninfo_unexecuted_blocks=1 00:20:39.247 00:20:39.247 ' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.247 --rc genhtml_branch_coverage=1 00:20:39.247 --rc genhtml_function_coverage=1 00:20:39.247 --rc genhtml_legend=1 00:20:39.247 --rc geninfo_all_blocks=1 00:20:39.247 --rc geninfo_unexecuted_blocks=1 00:20:39.247 00:20:39.247 ' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.247 --rc genhtml_branch_coverage=1 00:20:39.247 --rc genhtml_function_coverage=1 00:20:39.247 --rc genhtml_legend=1 00:20:39.247 --rc geninfo_all_blocks=1 00:20:39.247 --rc geninfo_unexecuted_blocks=1 00:20:39.247 00:20:39.247 ' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.247 --rc genhtml_branch_coverage=1 00:20:39.247 --rc genhtml_function_coverage=1 00:20:39.247 --rc genhtml_legend=1 00:20:39.247 --rc geninfo_all_blocks=1 00:20:39.247 --rc geninfo_unexecuted_blocks=1 00:20:39.247 00:20:39.247 ' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.247 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.508 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:47.652 12:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.652 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:47.653 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.653 12:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:47.653 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:47.653 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:47.653 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.653 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.653 12:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:20:47.653 00:20:47.653 --- 10.0.0.2 ping statistics --- 00:20:47.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.653 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:47.653 00:20:47.653 --- 10.0.0.1 ping statistics --- 00:20:47.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.653 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1013835 00:20:47.653 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1013835 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1013835 ']' 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.654 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.654 [2024-10-21 12:05:23.403199] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:47.654 [2024-10-21 12:05:23.403269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.654 [2024-10-21 12:05:23.493254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.654 [2024-10-21 12:05:23.545169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.654 [2024-10-21 12:05:23.545218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.654 [2024-10-21 12:05:23.545227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.654 [2024-10-21 12:05:23.545234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.654 [2024-10-21 12:05:23.545240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
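NOTE: the two app_setup_trace notices above give the capture recipe for this target directly. With shared-memory id 0 (the target was launched with -i 0) and the 0xFFFF tracepoint mask, a snapshot sketch, assuming spdk_trace sits in the usual build/bin location of the checkout:

  # decode a live snapshot of the nvmf target's tracepoints (shm id 0)
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw trace file for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0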
00:20:47.654 [2024-10-21 12:05:23.545962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.654 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.654 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:47.654 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:47.654 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.654 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 [2024-10-21 12:05:24.263523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 Malloc0 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.915 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.915 [2024-10-21 12:05:24.317951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1014056 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1014057 00:20:47.915 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.916 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1014058 00:20:47.916 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1014056 00:20:47.916 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.916 [2024-10-21 12:05:24.408791] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.916 [2024-10-21 12:05:24.409092] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.916 [2024-10-21 12:05:24.409317] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:48.859 Initializing NVMe Controllers 00:20:48.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:48.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:48.859 Initialization complete. Launching workers. 
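NOTE: the setup driven above caps the TCP transport at a single control message buffer (--control-msg-num 1) and then points three queue-depth-1 perf initiators, pinned to cores 0x2, 0x4 and 0x8, at the same subsystem, presumably so that their control messages contend for that one buffer. A condensed replay sketch, assuming rpc.py reaches the target's default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace:

  # transport with one control message buffer and 768-byte in-capsule data
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512      # 32 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # one of the three initiators; the other two differ only in the core mask
  ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The three per-core latency tables that follow are the output of those initiators.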
00:20:48.859 ======================================================== 00:20:48.859 Latency(us) 00:20:48.859 Device Information : IOPS MiB/s Average min max 00:20:48.859 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40896.58 40772.82 40953.76 00:20:48.859 ======================================================== 00:20:48.859 Total : 25.00 0.10 40896.58 40772.82 40953.76 00:20:48.859 00:20:49.121 Initializing NVMe Controllers 00:20:49.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:49.121 Initialization complete. Launching workers. 00:20:49.121 ======================================================== 00:20:49.121 Latency(us) 00:20:49.121 Device Information : IOPS MiB/s Average min max 00:20:49.121 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40921.58 40868.56 41334.06 00:20:49.121 ======================================================== 00:20:49.121 Total : 25.00 0.10 40921.58 40868.56 41334.06 00:20:49.121 00:20:49.121 Initializing NVMe Controllers 00:20:49.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:49.121 Initialization complete. Launching workers. 00:20:49.121 ======================================================== 00:20:49.121 Latency(us) 00:20:49.121 Device Information : IOPS MiB/s Average min max 00:20:49.121 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40907.11 40841.59 40990.76 00:20:49.121 ======================================================== 00:20:49.121 Total : 25.00 0.10 40907.11 40841.59 40990.76 00:20:49.121 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1014057 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1014058 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.121 rmmod nvme_tcp 00:20:49.121 rmmod nvme_fabrics 00:20:49.121 rmmod nvme_keyring 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 1013835 ']' 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1013835 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1013835 ']' 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1013835 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.121 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1013835 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1013835' 00:20:49.382 killing process with pid 1013835 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1013835 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1013835 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.382 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.383 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.383 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.931 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.931 00:20:51.931 real 0m12.360s 00:20:51.931 user 0m7.903s 00:20:51.931 sys 0m6.472s 00:20:51.931 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.931 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:51.931 ************************************ 00:20:51.931 END TEST nvmf_control_msg_list 00:20:51.931 
************************************ 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.931 ************************************ 00:20:51.931 START TEST nvmf_wait_for_buf 00:20:51.931 ************************************ 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:51.931 * Looking for test storage... 00:20:51.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:51.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.931 --rc genhtml_branch_coverage=1 00:20:51.931 --rc genhtml_function_coverage=1 00:20:51.931 --rc genhtml_legend=1 00:20:51.931 --rc geninfo_all_blocks=1 00:20:51.931 --rc geninfo_unexecuted_blocks=1 00:20:51.931 00:20:51.931 ' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:51.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.931 --rc genhtml_branch_coverage=1 00:20:51.931 --rc genhtml_function_coverage=1 00:20:51.931 --rc genhtml_legend=1 00:20:51.931 --rc geninfo_all_blocks=1 00:20:51.931 --rc geninfo_unexecuted_blocks=1 00:20:51.931 00:20:51.931 ' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:51.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.931 --rc genhtml_branch_coverage=1 00:20:51.931 --rc genhtml_function_coverage=1 00:20:51.931 --rc genhtml_legend=1 00:20:51.931 --rc geninfo_all_blocks=1 00:20:51.931 --rc geninfo_unexecuted_blocks=1 00:20:51.931 00:20:51.931 ' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:51.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.931 --rc genhtml_branch_coverage=1 00:20:51.931 --rc genhtml_function_coverage=1 00:20:51.931 --rc genhtml_legend=1 00:20:51.931 --rc geninfo_all_blocks=1 00:20:51.931 --rc geninfo_unexecuted_blocks=1 00:20:51.931 00:20:51.931 ' 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.931 12:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.931 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.932 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.076 
12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:00.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:00.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:00.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:00.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.076 12:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.076 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:21:00.077 00:21:00.077 --- 10.0.0.2 ping statistics --- 00:21:00.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.077 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:21:00.077 00:21:00.077 --- 10.0.0.1 ping statistics --- 00:21:00.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.077 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1018438 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1018438 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1018438 ']' 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.077 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.077 [2024-10-21 12:05:35.881813] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
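The nvmftestinit trace above reduces to a short recipe: move one port of the NIC pair into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) run on separate kernel stacks of the same host, then start the paused target inside that namespace. A minimal sketch of that setup, using the same commands the harness ran; the cvl_0_* interface names and the 10.0.0.0/24 addressing are specific to this CI node:

  #!/usr/bin/env bash
  # Target port goes into its own namespace; initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The ipts wrapper tags its rule with a comment so nvmftestfini can remove it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # Start the target paused (--wait-for-rpc) so pool sizes can be shrunk before init.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The --wait-for-rpc pause matters for this particular test: wait_for_buf.sh must shrink the iobuf small-buffer pool before the framework initializes. The rpc_cmd sequence traced in the entries that follow (rpc_cmd is the harness wrapper around scripts/rpc.py, speaking to the default /var/tmp/spdk.sock) amounts to roughly this, with the pool sizes made deliberately tiny so that the 128 KiB random reads at queue depth 4 issued later are forced onto the buffer-wait path:

  ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24   # 24 shared buffers: starved on purpose
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The pass criterion shows up further down: after spdk_nvme_perf runs against the listener, iobuf_get_stats must report a nonzero small_pool.retry (2038 here), i.e. I/O genuinely waited for buffers instead of failing outright.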
00:21:00.077 [2024-10-21 12:05:35.881881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.077 [2024-10-21 12:05:35.968687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.077 [2024-10-21 12:05:36.020650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.077 [2024-10-21 12:05:36.020703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.077 [2024-10-21 12:05:36.020711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.077 [2024-10-21 12:05:36.020718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.077 [2024-10-21 12:05:36.020725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.077 [2024-10-21 12:05:36.021507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:00.338 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 Malloc0 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 [2024-10-21 12:05:36.861655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.339 [2024-10-21 12:05:36.897966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.339 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:00.600 [2024-10-21 12:05:36.979427] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.987 Initializing NVMe Controllers 00:21:01.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:01.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:01.987 Initialization complete. Launching workers. 00:21:01.987 ======================================================== 00:21:01.987 Latency(us) 00:21:01.987 Device Information : IOPS MiB/s Average min max 00:21:01.987 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.67 8006.77 63855.53 00:21:01.987 ======================================================== 00:21:01.987 Total : 129.00 16.12 32294.67 8006.77 63855.53 00:21:01.987 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.987 rmmod nvme_tcp 00:21:01.987 rmmod nvme_fabrics 00:21:01.987 rmmod nvme_keyring 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1018438 ']' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1018438 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1018438 ']' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1018438 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1018438 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1018438' 00:21:01.987 killing process with pid 1018438 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1018438 00:21:01.987 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1018438 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.249 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.809 00:21:04.809 real 0m12.733s 00:21:04.809 user 0m5.133s 00:21:04.809 sys 0m6.209s 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.809 ************************************ 00:21:04.809 END TEST nvmf_wait_for_buf 00:21:04.809 ************************************ 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:04.809 12:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.809 12:05:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:11.397 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:11.397 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:11.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:11.397 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:11.398 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.398 12:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.660 ************************************ 00:21:11.660 START TEST nvmf_perf_adq 00:21:11.660 ************************************ 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.660 * Looking for test storage... 00:21:11.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.660 12:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:11.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.660 --rc genhtml_branch_coverage=1 00:21:11.660 --rc genhtml_function_coverage=1 00:21:11.660 --rc genhtml_legend=1 00:21:11.660 --rc geninfo_all_blocks=1 00:21:11.660 --rc geninfo_unexecuted_blocks=1 00:21:11.660 00:21:11.660 ' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:11.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.660 --rc genhtml_branch_coverage=1 00:21:11.660 --rc genhtml_function_coverage=1 00:21:11.660 --rc genhtml_legend=1 00:21:11.660 --rc geninfo_all_blocks=1 00:21:11.660 --rc geninfo_unexecuted_blocks=1 00:21:11.660 00:21:11.660 ' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:11.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.660 --rc genhtml_branch_coverage=1 00:21:11.660 --rc genhtml_function_coverage=1 00:21:11.660 --rc genhtml_legend=1 00:21:11.660 --rc geninfo_all_blocks=1 00:21:11.660 --rc geninfo_unexecuted_blocks=1 00:21:11.660 00:21:11.660 ' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:11.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.660 --rc genhtml_branch_coverage=1 00:21:11.660 --rc genhtml_function_coverage=1 00:21:11.660 --rc genhtml_legend=1 00:21:11.660 --rc geninfo_all_blocks=1 00:21:11.660 --rc geninfo_unexecuted_blocks=1 00:21:11.660 00:21:11.660 ' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
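The cmp_versions walk just traced is the coverage-tooling probe that precedes each test in this run: scripts/common.sh splits each version string on "." / "-" / ":", compares the components numerically, and because the installed lcov reports 1.15, `lt 1.15 2` succeeds and the pre-2.0 spelling of the coverage flags (--rc lcov_branch_coverage=1 ...) is exported. A stripped-down bash paraphrase of that comparison, omitting the decimal() validation the real helper performs on each component:

  # lt VER1 VER2 -> exit 0 iff VER1 is strictly older than VER2
  lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earliest differing component decides
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov options"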
00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:11.660 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.660 12:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.803 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.803 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.803 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.803 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.803 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.804 12:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:19.804 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:19.804 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:19.804 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:19.804 12:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:19.804 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:19.804 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:20.375 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:22.929 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.223 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:28.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:28.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:28.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:28.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:21:28.224 00:21:28.224 --- 10.0.0.2 ping statistics --- 00:21:28.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.224 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
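The nvmf_tcp_init sequence traced above builds the test's two-endpoint topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port between them. A minimal standalone sketch of the same wiring, using hypothetical interface names tgt0/ini0 in place of cvl_0_0/cvl_0_1 and run as root:

  # Assumed names: tgt0 (target port), ini0 (initiator port), namespace spdk_tgt_ns.
  ip netns add spdk_tgt_ns
  ip link set tgt0 netns spdk_tgt_ns                    # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev ini0                      # initiator IP (root ns)
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
  ip link set ini0 up
  ip netns exec spdk_tgt_ns ip link set tgt0 up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                    # root ns -> target ns check

The two pings in the trace (10.0.0.2 from the root namespace, then 10.0.0.1 from inside cvl_0_0_ns_spdk, continued below) verify reachability in both directions before the target is launched.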
00:21:28.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:21:28.224 00:21:28.224 --- 10.0.0.1 ping statistics --- 00:21:28.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.224 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:28.224 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1028737 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1028737 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1028737 ']' 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.225 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.225 [2024-10-21 12:06:04.454104] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
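After the connectivity checks, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A hedged sketch of that pattern (binary and rpc.py paths are relative to an SPDK checkout; the retry loop is an assumption, not the script's exact waitforlisten logic):

  # Start nvmf_tgt in the target namespace, 4 cores, startup gated on RPC.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Poll until the app answers on the default socket /var/tmp/spdk.sock.
  for _ in $(seq 1 100); do
    ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done

With --wait-for-rpc the framework stays uninitialized until an explicit framework_start_init RPC, which is why, further down in the trace, sock_impl_set_options (placement ID, zero-copy send) is applied before framework_start_init and before the transport is created.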
00:21:28.225 [2024-10-21 12:06:04.454173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.225 [2024-10-21 12:06:04.543180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.225 [2024-10-21 12:06:04.596660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.225 [2024-10-21 12:06:04.596712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.225 [2024-10-21 12:06:04.596720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.225 [2024-10-21 12:06:04.596728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.225 [2024-10-21 12:06:04.596735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.225 [2024-10-21 12:06:04.598754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.225 [2024-10-21 12:06:04.598915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.225 [2024-10-21 12:06:04.599078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.225 [2024-10-21 12:06:04.599078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.799 
12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.799 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 [2024-10-21 12:06:05.479132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 Malloc1 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 [2024-10-21 12:06:05.555450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1029093 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:29.060 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:30.977 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:30.977 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.977 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.237 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.237 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:31.237 "tick_rate": 2400000000, 00:21:31.237 "poll_groups": [ 00:21:31.237 { 00:21:31.237 "name": "nvmf_tgt_poll_group_000", 00:21:31.237 "admin_qpairs": 1, 00:21:31.237 "io_qpairs": 1, 00:21:31.237 "current_admin_qpairs": 1, 00:21:31.237 "current_io_qpairs": 1, 00:21:31.237 "pending_bdev_io": 0, 00:21:31.237 "completed_nvme_io": 16971, 00:21:31.237 "transports": [ 00:21:31.237 { 00:21:31.237 "trtype": "TCP" 00:21:31.237 } 00:21:31.237 ] 00:21:31.237 }, 00:21:31.237 { 00:21:31.237 "name": "nvmf_tgt_poll_group_001", 00:21:31.237 "admin_qpairs": 0, 00:21:31.237 "io_qpairs": 1, 00:21:31.237 "current_admin_qpairs": 0, 00:21:31.237 "current_io_qpairs": 1, 00:21:31.237 "pending_bdev_io": 0, 00:21:31.237 "completed_nvme_io": 17876, 00:21:31.238 "transports": [ 00:21:31.238 { 00:21:31.238 "trtype": "TCP" 00:21:31.238 } 00:21:31.238 ] 00:21:31.238 }, 00:21:31.238 { 00:21:31.238 "name": "nvmf_tgt_poll_group_002", 00:21:31.238 "admin_qpairs": 0, 00:21:31.238 "io_qpairs": 1, 00:21:31.238 "current_admin_qpairs": 0, 00:21:31.238 "current_io_qpairs": 1, 00:21:31.238 "pending_bdev_io": 0, 00:21:31.238 "completed_nvme_io": 19246, 00:21:31.238 "transports": [ 00:21:31.238 { 00:21:31.238 "trtype": "TCP" 00:21:31.238 } 00:21:31.238 ] 00:21:31.238 }, 00:21:31.238 { 00:21:31.238 "name": "nvmf_tgt_poll_group_003", 00:21:31.238 "admin_qpairs": 0, 00:21:31.238 "io_qpairs": 1, 00:21:31.238 "current_admin_qpairs": 0, 00:21:31.238 "current_io_qpairs": 1, 00:21:31.238 "pending_bdev_io": 0, 00:21:31.238 "completed_nvme_io": 16824, 00:21:31.238 "transports": [ 00:21:31.238 { 00:21:31.238 "trtype": "TCP" 00:21:31.238 } 00:21:31.238 ] 00:21:31.238 } 00:21:31.238 ] 00:21:31.238 }' 00:21:31.238 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:31.238 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:31.238 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:31.238 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:31.238 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1029093 00:21:39.370 Initializing NVMe Controllers 00:21:39.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:39.370 Initialization complete. Launching workers. 00:21:39.370 ======================================================== 00:21:39.370 Latency(us) 00:21:39.370 Device Information : IOPS MiB/s Average min max 00:21:39.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13887.10 54.25 4608.55 1195.02 11256.93 00:21:39.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13133.80 51.30 4873.93 1098.98 13608.09 00:21:39.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13387.60 52.30 4781.52 1306.62 12944.34 00:21:39.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12513.50 48.88 5114.78 1293.43 13257.31 00:21:39.370 ======================================================== 00:21:39.370 Total : 52922.00 206.73 4837.87 1098.98 13608.09 00:21:39.370 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.370 rmmod nvme_tcp 00:21:39.370 rmmod nvme_fabrics 00:21:39.370 rmmod nvme_keyring 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1028737 ']' 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1028737 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1028737 ']' 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1028737 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028737 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028737' 00:21:39.370 killing process with pid 1028737 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1028737 00:21:39.370 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1028737 00:21:39.370 12:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.371 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.916 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.916 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:41.916 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:41.916 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:43.298 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:45.214 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:50.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:50.506 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:50.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.506 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:50.507 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.507 12:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.507 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.507 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:21:50.768 00:21:50.768 --- 10.0.0.2 ping statistics --- 00:21:50.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.768 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:21:50.768 00:21:50.768 --- 10.0.0.1 ping statistics --- 00:21:50.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.768 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:50.768 net.core.busy_poll = 1 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:50.768 net.core.busy_read = 1 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:50.768 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.029 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1034036 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1034036 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1034036 ']' 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.030 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.030 [2024-10-21 12:06:27.513560] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:21:51.030 [2024-10-21 12:06:27.513628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.030 [2024-10-21 12:06:27.605207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.291 [2024-10-21 12:06:27.658912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:51.291 [2024-10-21 12:06:27.658969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.291 [2024-10-21 12:06:27.658978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.291 [2024-10-21 12:06:27.658986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.291 [2024-10-21 12:06:27.658992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.291 [2024-10-21 12:06:27.661164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.291 [2024-10-21 12:06:27.661355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.291 [2024-10-21 12:06:27.661450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.291 [2024-10-21 12:06:27.661631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.863 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 [2024-10-21 12:06:28.534340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 Malloc1 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.123 [2024-10-21 12:06:28.613544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1034395 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:52.123 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:54.037 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:54.037 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.037 12:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.297 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.297 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:54.297 "tick_rate": 2400000000, 00:21:54.297 "poll_groups": [ 00:21:54.297 { 00:21:54.297 "name": "nvmf_tgt_poll_group_000", 00:21:54.297 "admin_qpairs": 1, 00:21:54.297 "io_qpairs": 3, 00:21:54.297 "current_admin_qpairs": 1, 00:21:54.297 "current_io_qpairs": 3, 00:21:54.297 "pending_bdev_io": 0, 00:21:54.297 "completed_nvme_io": 26225, 00:21:54.297 "transports": [ 00:21:54.297 { 00:21:54.297 "trtype": "TCP" 00:21:54.297 } 00:21:54.297 ] 00:21:54.297 }, 00:21:54.297 { 00:21:54.297 "name": "nvmf_tgt_poll_group_001", 00:21:54.297 "admin_qpairs": 0, 00:21:54.297 "io_qpairs": 1, 00:21:54.297 "current_admin_qpairs": 0, 00:21:54.297 "current_io_qpairs": 1, 00:21:54.297 "pending_bdev_io": 0, 00:21:54.297 "completed_nvme_io": 25435, 00:21:54.297 "transports": [ 00:21:54.297 { 00:21:54.297 "trtype": "TCP" 00:21:54.297 } 00:21:54.297 ] 00:21:54.297 }, 00:21:54.297 { 00:21:54.297 "name": "nvmf_tgt_poll_group_002", 00:21:54.297 "admin_qpairs": 0, 00:21:54.297 "io_qpairs": 0, 00:21:54.297 "current_admin_qpairs": 0, 00:21:54.297 "current_io_qpairs": 0, 00:21:54.297 "pending_bdev_io": 0, 00:21:54.297 "completed_nvme_io": 0, 00:21:54.297 "transports": [ 00:21:54.297 { 00:21:54.297 "trtype": "TCP" 00:21:54.297 } 00:21:54.297 ] 00:21:54.297 }, 00:21:54.297 { 00:21:54.297 "name": "nvmf_tgt_poll_group_003", 00:21:54.297 "admin_qpairs": 0, 00:21:54.297 "io_qpairs": 0, 00:21:54.298 "current_admin_qpairs": 0, 00:21:54.298 "current_io_qpairs": 0, 00:21:54.298 "pending_bdev_io": 0, 00:21:54.298 "completed_nvme_io": 0, 00:21:54.298 "transports": [ 00:21:54.298 { 00:21:54.298 "trtype": "TCP" 00:21:54.298 } 00:21:54.298 ] 00:21:54.298 } 00:21:54.298 ] 00:21:54.298 }' 00:21:54.298 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:54.298 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:54.298 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:54.298 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:54.298 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1034395 00:22:02.649 Initializing NVMe Controllers 00:22:02.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:02.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:02.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:02.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:02.649 Initialization complete. Launching workers. 
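
The nvmf_get_stats/jq/wc pipeline traced above is the actual ADQ acceptance check, run while spdk_nvme_perf is still driving I/O and before the latency table below is printed: the tc flower filter pinned every port-4420 flow to two hardware queues, so only poll groups 000 and 001 carry I/O qpairs and the two idle groups satisfy the "[[ 2 -lt 2 ]]" guard. A minimal standalone sketch of the same check, assuming scripts/rpc.py is aimed at the target's RPC socket (the suite's rpc_cmd wrapper does the equivalent):

# count poll groups that received no I/O qpairs; steering passes when at least 2 of 4 sit idle
idle=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats |
       jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' |
       wc -l)
[[ $idle -lt 2 ]] && echo "ADQ steering check failed: only $idle idle poll groups" >&2
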
00:22:02.649 ======================================================== 00:22:02.649 Latency(us) 00:22:02.649 Device Information : IOPS MiB/s Average min max 00:22:02.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7403.70 28.92 8645.14 1344.27 58194.95 00:22:02.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6042.70 23.60 10591.90 1035.81 59506.56 00:22:02.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6217.20 24.29 10298.90 1410.75 60418.40 00:22:02.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 17771.39 69.42 3601.17 943.65 45894.04 00:22:02.649 ======================================================== 00:22:02.649 Total : 37434.99 146.23 6839.53 943.65 60418.40 00:22:02.649 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.649 rmmod nvme_tcp 00:22:02.649 rmmod nvme_fabrics 00:22:02.649 rmmod nvme_keyring 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1034036 ']' 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1034036 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1034036 ']' 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1034036 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.649 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034036 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034036' 00:22:02.649 killing process with pid 1034036 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1034036 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1034036 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:02.649 
12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.649 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:05.194 00:22:05.194 real 0m53.205s 00:22:05.194 user 2m50.158s 00:22:05.194 sys 0m11.349s 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.194 ************************************ 00:22:05.194 END TEST nvmf_perf_adq 00:22:05.194 ************************************ 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.194 ************************************ 00:22:05.194 START TEST nvmf_shutdown 00:22:05.194 ************************************ 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:05.194 * Looking for test storage... 
00:22:05.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.194 --rc genhtml_branch_coverage=1 00:22:05.194 --rc genhtml_function_coverage=1 00:22:05.194 --rc genhtml_legend=1 00:22:05.194 --rc geninfo_all_blocks=1 00:22:05.194 --rc geninfo_unexecuted_blocks=1 00:22:05.194 00:22:05.194 ' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.194 --rc genhtml_branch_coverage=1 00:22:05.194 --rc genhtml_function_coverage=1 00:22:05.194 --rc genhtml_legend=1 00:22:05.194 --rc geninfo_all_blocks=1 00:22:05.194 --rc geninfo_unexecuted_blocks=1 00:22:05.194 00:22:05.194 ' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.194 --rc genhtml_branch_coverage=1 00:22:05.194 --rc genhtml_function_coverage=1 00:22:05.194 --rc genhtml_legend=1 00:22:05.194 --rc geninfo_all_blocks=1 00:22:05.194 --rc geninfo_unexecuted_blocks=1 00:22:05.194 00:22:05.194 ' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.194 --rc genhtml_branch_coverage=1 00:22:05.194 --rc genhtml_function_coverage=1 00:22:05.194 --rc genhtml_legend=1 00:22:05.194 --rc geninfo_all_blocks=1 00:22:05.194 --rc geninfo_unexecuted_blocks=1 00:22:05.194 00:22:05.194 ' 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.194 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:05.195 12:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.195 ************************************ 00:22:05.195 START TEST nvmf_shutdown_tc1 00:22:05.195 ************************************ 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.195 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.342 12:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.342 12:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:13.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.342 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:13.343 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:13.343 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:13.343 12:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:13.343 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.343 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:22:13.343 00:22:13.343 --- 10.0.0.2 ping statistics --- 00:22:13.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.343 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:22:13.343 00:22:13.343 --- 10.0.0.1 ping statistics --- 00:22:13.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.343 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1040591 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1040591 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1040591 ']' 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
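
nvmfappstart above backgrounds nvmf_tgt inside the target namespace, and waitforlisten then blocks until the app's UNIX-domain RPC socket answers, which is why the trace idles here until the DPDK EAL banner appears. A rough sketch of that launch-and-poll pattern, assuming the default /var/tmp/spdk.sock socket echoed in the log (the real helper lives in common/autotest_common.sh and also verifies the pid stays alive while it retries):

# start the target in the test namespace and remember its pid
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# any cheap RPC fails until the socket is up; rpc_get_methods is a conventional probe
for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
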
00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.343 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.343 [2024-10-21 12:06:49.240902] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:22:13.343 [2024-10-21 12:06:49.240968] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.343 [2024-10-21 12:06:49.330268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.343 [2024-10-21 12:06:49.383175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.343 [2024-10-21 12:06:49.383225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.343 [2024-10-21 12:06:49.383234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.343 [2024-10-21 12:06:49.383242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.343 [2024-10-21 12:06:49.383248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.343 [2024-10-21 12:06:49.385289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.343 [2024-10-21 12:06:49.385525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:13.343 [2024-10-21 12:06:49.385527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.343 [2024-10-21 12:06:49.385368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.605 [2024-10-21 12:06:50.125728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:13.605 12:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.605 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.866 Malloc1 
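
Rather than paying one rpc.py start-up per call, the shutdown.sh@27-36 loop traced above appends one heredoc block per subsystem to rpcs.txt and then replays the whole file through a single rpc_cmd (xtrace does not print the redirection); the Malloc1 name above, and Malloc2 through Malloc10 just below, is each malloc bdev being echoed back as its batched create completes. A sketch of what that batch plausibly looks like, with RPC names taken from this run's configuration and the serial numbers shortened for illustration (the exact heredoc lives in test/nvmf/target/shutdown.sh):

# accumulate one block per subsystem, then execute the whole batch in one process
rm -f rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# replay the batch; assumed here that rpc.py executes stdin line by line, as rpc_cmd does
scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt
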
00:22:13.866 [2024-10-21 12:06:50.250957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.866 Malloc2 00:22:13.866 Malloc3 00:22:13.866 Malloc4 00:22:13.866 Malloc5 00:22:13.866 Malloc6 00:22:14.128 Malloc7 00:22:14.128 Malloc8 00:22:14.128 Malloc9 00:22:14.128 Malloc10 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1040916 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1040916 /var/tmp/bdevperf.sock 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1040916 ']' 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:14.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
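
The `--json /dev/fd/63` argument hands bdev_svc a configuration generated on the fly: `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` runs in a process substitution, and its xtrace is what follows. The helper captures one `bdev_nvme_attach_controller` stanza per subsystem from a heredoc into a bash array, then comma-joins the array under `IFS=,` and renders the result with `jq .`. A condensed sketch of that pattern; the real helper in nvmf/common.sh additionally wraps the stanzas in a "subsystems"/"bdev" envelope, which is omitted here:

gen_attach_stanzas() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one controller-attach stanza per subsystem id, expanded now from the environment
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                              # comma-join, the IFS=, / printf pair visible in the trace below
    printf '[%s]\n' "${config[*]}" | jq .    # bracket the join so jq validates it as a JSON array
}

With TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, `gen_attach_stanzas {1..10}` expands to the ten Nvme1 .. Nvme10 stanzas printed in rendered form further below.
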
00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:14.128 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 [2024-10-21 12:06:50.772214] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:22:14.390 [2024-10-21 12:06:50.772284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:14.390 { 00:22:14.390 "params": { 00:22:14.390 "name": "Nvme$subsystem", 00:22:14.390 "trtype": "$TEST_TRANSPORT", 00:22:14.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.390 "adrfam": "ipv4", 
00:22:14.390 "trsvcid": "$NVMF_PORT", 00:22:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.390 "hdgst": ${hdgst:-false}, 00:22:14.390 "ddgst": ${ddgst:-false} 00:22:14.390 }, 00:22:14.390 "method": "bdev_nvme_attach_controller" 00:22:14.390 } 00:22:14.390 EOF 00:22:14.390 )") 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:14.390 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:14.391 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme1", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme2", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme3", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme4", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme5", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme6", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme7", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 
"adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme8", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme9", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 },{ 00:22:14.391 "params": { 00:22:14.391 "name": "Nvme10", 00:22:14.391 "trtype": "tcp", 00:22:14.391 "traddr": "10.0.0.2", 00:22:14.391 "adrfam": "ipv4", 00:22:14.391 "trsvcid": "4420", 00:22:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:14.391 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:14.391 "hdgst": false, 00:22:14.391 "ddgst": false 00:22:14.391 }, 00:22:14.391 "method": "bdev_nvme_attach_controller" 00:22:14.391 }' 00:22:14.391 [2024-10-21 12:06:50.858414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.391 [2024-10-21 12:06:50.912600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1040916 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:15.776 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:16.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1040916 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1040591 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.717 [2024-10-21 12:06:53.304340] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:22:16.717 [2024-10-21 12:06:53.304394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041582 ] 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.717 { 00:22:16.717 "params": { 00:22:16.717 "name": "Nvme$subsystem", 00:22:16.717 "trtype": "$TEST_TRANSPORT", 00:22:16.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.717 "adrfam": "ipv4", 00:22:16.717 "trsvcid": "$NVMF_PORT", 00:22:16.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.717 "hdgst": ${hdgst:-false}, 00:22:16.717 "ddgst": ${ddgst:-false} 00:22:16.717 }, 00:22:16.717 "method": "bdev_nvme_attach_controller" 00:22:16.717 } 00:22:16.717 EOF 00:22:16.717 )") 00:22:16.717 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.978 { 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme$subsystem", 00:22:16.978 "trtype": "$TEST_TRANSPORT", 00:22:16.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "$NVMF_PORT", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.978 "hdgst": ${hdgst:-false}, 00:22:16.978 "ddgst": ${ddgst:-false} 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 } 00:22:16.978 EOF 00:22:16.978 )") 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.978 { 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme$subsystem", 00:22:16.978 "trtype": "$TEST_TRANSPORT", 00:22:16.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "$NVMF_PORT", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.978 "hdgst": ${hdgst:-false}, 00:22:16.978 "ddgst": ${ddgst:-false} 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 } 00:22:16.978 EOF 00:22:16.978 )") 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.978 { 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme$subsystem", 00:22:16.978 "trtype": "$TEST_TRANSPORT", 00:22:16.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.978 
"adrfam": "ipv4", 00:22:16.978 "trsvcid": "$NVMF_PORT", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.978 "hdgst": ${hdgst:-false}, 00:22:16.978 "ddgst": ${ddgst:-false} 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 } 00:22:16.978 EOF 00:22:16.978 )") 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:16.978 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme1", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme2", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme3", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme4", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme5", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme6", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme7", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 
00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme8", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:16.978 "hdgst": false, 00:22:16.978 "ddgst": false 00:22:16.978 }, 00:22:16.978 "method": "bdev_nvme_attach_controller" 00:22:16.978 },{ 00:22:16.978 "params": { 00:22:16.978 "name": "Nvme9", 00:22:16.978 "trtype": "tcp", 00:22:16.978 "traddr": "10.0.0.2", 00:22:16.978 "adrfam": "ipv4", 00:22:16.978 "trsvcid": "4420", 00:22:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:16.978 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:16.978 "hdgst": false, 00:22:16.979 "ddgst": false 00:22:16.979 }, 00:22:16.979 "method": "bdev_nvme_attach_controller" 00:22:16.979 },{ 00:22:16.979 "params": { 00:22:16.979 "name": "Nvme10", 00:22:16.979 "trtype": "tcp", 00:22:16.979 "traddr": "10.0.0.2", 00:22:16.979 "adrfam": "ipv4", 00:22:16.979 "trsvcid": "4420", 00:22:16.979 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:16.979 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:16.979 "hdgst": false, 00:22:16.979 "ddgst": false 00:22:16.979 }, 00:22:16.979 "method": "bdev_nvme_attach_controller" 00:22:16.979 }' 00:22:16.979 [2024-10-21 12:06:53.382819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.979 [2024-10-21 12:06:53.418959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.362 Running I/O for 1 seconds... 
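
In the bdevperf summary that follows, the MiB/s column is derived from IOPS at the fixed transfer size requested with `-o 65536`: MiB/s = IOPS × 65536 / 2^20, i.e. simply IOPS / 16. That checks out against both the aggregate line (1861.00 / 16 = 116.31) and the per-device Total (2443.04 / 16 = 152.69). A one-liner to verify, assuming bc is available:

printf '%.2f MiB/s\n' "$(echo '1861.00 * 65536 / 1048576' | bc -l)"   # prints 116.31 MiB/s
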
00:22:19.304 1861.00 IOPS, 116.31 MiB/s
00:22:19.304 Latency(us)
00:22:19.304 [2024-10-21T10:06:55.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:19.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme1n1 : 1.15 222.20 13.89 0.00 0.00 285126.61 16602.45 248162.99
00:22:19.304 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme2n1 : 1.14 225.22 14.08 0.00 0.00 276476.59 19770.03 227191.47
00:22:19.304 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme3n1 : 1.11 231.31 14.46 0.00 0.00 264352.21 17803.95 244667.73
00:22:19.304 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme4n1 : 1.10 232.53 14.53 0.00 0.00 258218.45 18131.63 258648.75
00:22:19.304 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme5n1 : 1.14 224.13 14.01 0.00 0.00 263806.93 20206.93 246415.36
00:22:19.304 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme6n1 : 1.15 223.09 13.94 0.00 0.00 260427.31 20643.84 242920.11
00:22:19.304 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme7n1 : 1.20 269.71 16.86 0.00 0.00 204263.22 12561.07 263891.63
00:22:19.304 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme8n1 : 1.15 277.52 17.34 0.00 0.00 201385.30 13544.11 241172.48
00:22:19.304 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme9n1 : 1.18 270.33 16.90 0.00 0.00 204065.28 14308.69 248162.99
00:22:19.304 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:19.304 Verification LBA range: start 0x0 length 0x400
00:22:19.304 Nvme10n1 : 1.20 267.00 16.69 0.00 0.00 203277.48 9229.65 267386.88
00:22:19.304 [2024-10-21T10:06:55.899Z] ===================================================================================================================
00:22:19.304 [2024-10-21T10:06:55.899Z] Total : 2443.04 152.69 0.00 0.00 238555.58 9229.65 267386.88
00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:19.565 12:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.565 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.565 rmmod nvme_tcp 00:22:19.565 rmmod nvme_fabrics 00:22:19.565 rmmod nvme_keyring 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1040591 ']' 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1040591 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1040591 ']' 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1040591 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040591 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040591' 00:22:19.565 killing process with pid 1040591 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1040591 00:22:19.565 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1040591 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:19.825 12:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.825 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.375 00:22:22.375 real 0m16.833s 00:22:22.375 user 0m33.632s 00:22:22.375 sys 0m6.999s 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.375 ************************************ 00:22:22.375 END TEST nvmf_shutdown_tc1 00:22:22.375 ************************************ 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:22.375 ************************************ 00:22:22.375 START TEST nvmf_shutdown_tc2 00:22:22.375 ************************************ 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.375 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.376 
12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.376 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.377 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:22.378 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:22.379 12:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:22.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:22:22.379 00:22:22.379 --- 10.0.0.2 ping statistics --- 00:22:22.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.379 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:22:22.379 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:22:22.380 00:22:22.380 --- 10.0.0.1 ping statistics --- 00:22:22.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.380 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1042719 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1042719 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1042719 ']' 00:22:22.380 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.381 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.381 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.381 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.381 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.381 [2024-10-21 12:06:58.910171] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:22:22.381 [2024-10-21 12:06:58.910224] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.643 [2024-10-21 12:06:58.996190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.643 [2024-10-21 12:06:59.037764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.643 [2024-10-21 12:06:59.037807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.643 [2024-10-21 12:06:59.037814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.643 [2024-10-21 12:06:59.037819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.643 [2024-10-21 12:06:59.037824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.643 [2024-10-21 12:06:59.039396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.643 [2024-10-21 12:06:59.039709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.643 [2024-10-21 12:06:59.039871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:22.643 [2024-10-21 12:06:59.039872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 [2024-10-21 12:06:59.753871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.216 
12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.216 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.477 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.477 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.477 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:23.477 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
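
Aside: the ten "cat" iterations above (shutdown.sh@28-29) only stage RPC lines into rpcs.txt; shutdown.sh@36 then replays the whole file against the target in one rpc_cmd batch. The heredoc body itself is never echoed by the trace, so the sketch below is an inference from what the target prints next (Malloc1..Malloc10 bdevs and a 10.0.0.2:4420 listener); the 64 MiB/512 B Malloc geometry and the SPDK$i serial are assumptions.

# Hypothetical reconstruction of one staged block per subsystem i:
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# The harness's rpc_cmd wrapper then feeds the batch over the target's
# RPC socket; rpc.py shown here as a stand-in:
scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt
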
00:22:23.477 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.477 Malloc1 00:22:23.477 [2024-10-21 12:06:59.864033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.477 Malloc2 00:22:23.477 Malloc3 00:22:23.477 Malloc4 00:22:23.477 Malloc5 00:22:23.477 Malloc6 00:22:23.477 Malloc7 00:22:23.739 Malloc8 00:22:23.739 Malloc9 00:22:23.739 Malloc10 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1043051 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1043051 /var/tmp/bdevperf.sock 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1043051 ']' 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
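
That "Waiting for process..." line comes from the waitforlisten helper (autotest_common.sh@831-864 in the trace). Only its argument checks and the final return 0 are visible here, so the loop body below is an assumed shape, not the verbatim helper: keep checking that the pid is alive and retry until the RPC socket answers.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [ -z "$pid" ] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # process died while starting
        # socket answers -> app is up (rpc.py as a stand-in for rpc_cmd)
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}
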
00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": "bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": "bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": 
"bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": "bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": "bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.739 "params": { 00:22:23.739 "name": "Nvme$subsystem", 00:22:23.739 "trtype": "$TEST_TRANSPORT", 00:22:23.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.739 "adrfam": "ipv4", 00:22:23.739 "trsvcid": "$NVMF_PORT", 00:22:23.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.739 "hdgst": ${hdgst:-false}, 00:22:23.739 "ddgst": ${ddgst:-false} 00:22:23.739 }, 00:22:23.739 "method": "bdev_nvme_attach_controller" 00:22:23.739 } 00:22:23.739 EOF 00:22:23.739 )") 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.739 [2024-10-21 12:07:00.312338] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:22:23.739 [2024-10-21 12:07:00.312392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043051 ] 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.739 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.739 { 00:22:23.740 "params": { 00:22:23.740 "name": "Nvme$subsystem", 00:22:23.740 "trtype": "$TEST_TRANSPORT", 00:22:23.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.740 "adrfam": "ipv4", 00:22:23.740 "trsvcid": "$NVMF_PORT", 00:22:23.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.740 "hdgst": ${hdgst:-false}, 00:22:23.740 "ddgst": ${ddgst:-false} 00:22:23.740 }, 00:22:23.740 "method": "bdev_nvme_attach_controller" 00:22:23.740 } 00:22:23.740 EOF 00:22:23.740 )") 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.740 { 00:22:23.740 "params": { 00:22:23.740 "name": "Nvme$subsystem", 00:22:23.740 "trtype": "$TEST_TRANSPORT", 00:22:23.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.740 "adrfam": "ipv4", 00:22:23.740 "trsvcid": "$NVMF_PORT", 00:22:23.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.740 "hdgst": ${hdgst:-false}, 00:22:23.740 "ddgst": ${ddgst:-false} 00:22:23.740 }, 00:22:23.740 "method": "bdev_nvme_attach_controller" 00:22:23.740 } 00:22:23.740 EOF 00:22:23.740 )") 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:23.740 { 00:22:23.740 "params": { 00:22:23.740 "name": "Nvme$subsystem", 00:22:23.740 "trtype": "$TEST_TRANSPORT", 00:22:23.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.740 "adrfam": "ipv4", 00:22:23.740 "trsvcid": "$NVMF_PORT", 00:22:23.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.740 "hdgst": ${hdgst:-false}, 00:22:23.740 "ddgst": ${ddgst:-false} 00:22:23.740 }, 00:22:23.740 "method": "bdev_nvme_attach_controller" 00:22:23.740 } 00:22:23.740 EOF 00:22:23.740 )") 00:22:23.740 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:24.000 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.000 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.000 { 00:22:24.000 "params": { 00:22:24.000 "name": "Nvme$subsystem", 00:22:24.000 "trtype": "$TEST_TRANSPORT", 00:22:24.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.000 
"adrfam": "ipv4", 00:22:24.000 "trsvcid": "$NVMF_PORT", 00:22:24.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.001 "hdgst": ${hdgst:-false}, 00:22:24.001 "ddgst": ${ddgst:-false} 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 } 00:22:24.001 EOF 00:22:24.001 )") 00:22:24.001 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:24.001 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:24.001 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:24.001 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme1", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme2", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme3", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme4", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme5", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme6", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme7", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 
00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme8", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme9", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 },{ 00:22:24.001 "params": { 00:22:24.001 "name": "Nvme10", 00:22:24.001 "trtype": "tcp", 00:22:24.001 "traddr": "10.0.0.2", 00:22:24.001 "adrfam": "ipv4", 00:22:24.001 "trsvcid": "4420", 00:22:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.001 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.001 "hdgst": false, 00:22:24.001 "ddgst": false 00:22:24.001 }, 00:22:24.001 "method": "bdev_nvme_attach_controller" 00:22:24.001 }' 00:22:24.001 [2024-10-21 12:07:00.390579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.001 [2024-10-21 12:07:00.426868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.387 Running I/O for 10 seconds... 
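
Before pulling the plug, the harness verifies that I/O is actually flowing: the shutdown.sh@58-68 trace that follows polls Nvme1n1's read counter over bdevperf's RPC socket (3, then 67, then 131 reads here) until it crosses 100. Condensed, with rpc.py standing in for the rpc_cmd wrapper, the loop is roughly:

waitforio() {
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do              # up to 10 polls, 0.25 s apart
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                                # enough reads completed
            break
        fi
        sleep 0.25
    done
    return $ret
}
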
00:22:25.387 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.387 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:25.387 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:25.387 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.387 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:25.648 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.909 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:25.909 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.169 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1043051 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1043051 ']' 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1043051 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.170 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043051 00:22:26.429 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.429 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.429 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043051' 00:22:26.429 killing process with pid 1043051 00:22:26.429 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1043051 00:22:26.429 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1043051 00:22:26.429 Received shutdown signal, test time was about 0.982249 seconds
00:22:26.429 [2024-10-21T10:07:03.024Z] Latency(us)
(all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400)
Device Information : runtime(s)      IOPS     MiB/s    Fail/s     TO/s      Average        min        max
Nvme1n1            :       0.97    263.13     16.45      0.00     0.00    240337.71   18131.63  251658.24
Nvme2n1            :       0.98    260.86     16.30      0.00     0.00    237655.68   16493.23  246415.36
Nvme3n1            :       0.98    261.58     16.35      0.00     0.00    232052.27   21626.88  251658.24
Nvme4n1            :       0.98    262.31     16.39      0.00     0.00    226521.17   20753.07  225443.84
Nvme5n1            :       0.95    203.15     12.70      0.00     0.00    285548.66   18896.21  251658.24
Nvme6n1            :       0.96    200.41     12.53      0.00     0.00    283342.51   17148.59  253405.87
Nvme7n1            :       0.97    265.21     16.58      0.00     0.00    209481.49   13871.79  246415.36
Nvme8n1            :       0.96    266.01     16.63      0.00     0.00    203820.37   29709.65  230686.72
Nvme9n1            :       0.95    201.47     12.59      0.00     0.00    262219.66   21189.97  246415.36
Nvme10n1           :       0.97    198.46     12.40      0.00     0.00    260746.81   19551.57  270882.13
==========================================================================================================
Total              :              2382.58    148.91      0.00     0.00    240973.55   13871.79  270882.13
00:22:26.430 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1042719 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 12:07:03
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.814 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.814 rmmod nvme_tcp 00:22:27.814 rmmod nvme_fabrics 00:22:27.814 rmmod nvme_keyring 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1042719 ']' 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1042719 ']' 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1042719' 00:22:27.814 killing process with pid 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1042719 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:27.814 12:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.814 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.363 00:22:30.363 real 0m7.935s 00:22:30.363 user 0m24.124s 00:22:30.363 sys 0m1.288s 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.363 ************************************ 00:22:30.363 END TEST nvmf_shutdown_tc2 00:22:30.363 ************************************ 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.363 ************************************ 00:22:30.363 START TEST nvmf_shutdown_tc3 00:22:30.363 ************************************ 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:30.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:30.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.363 12:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:30.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.363 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:30.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.364 12:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptr -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:22:30.364 00:22:30.364 --- 10.0.0.2 ping statistics --- 00:22:30.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.364 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:22:30.364 00:22:30.364 --- 10.0.0.1 ping statistics --- 00:22:30.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.364 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1044316 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1044316 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:30.364 12:07:06 
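Condensed, the nvmf_tcp_init sequence traced above is a standard two-port, one-namespace loopback setup. The sketch below replays it by hand; the interface and namespace names (tgt0, ini0, spdk_tgt_ns) are illustrative stand-ins for cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk, and error handling is omitted:

    # Move the target-side port into its own network namespace so target and
    # initiator talk over the wire instead of the kernel loopback path.
    ip netns add spdk_tgt_ns
    ip link set tgt0 netns spdk_tgt_ns                 # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev ini0                   # initiator side stays in the default ns
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec spdk_tgt_ns ip link set tgt0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface, then verify
    # reachability in both directions before starting the target.
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # default ns -> namespaced target
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1       # and back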
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1044316 ']' 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.364 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.364 [2024-10-21 12:07:06.937231] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:22:30.364 [2024-10-21 12:07:06.937293] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.625 [2024-10-21 12:07:06.998674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.625 [2024-10-21 12:07:07.030265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.625 [2024-10-21 12:07:07.030292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.625 [2024-10-21 12:07:07.030297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.625 [2024-10-21 12:07:07.030302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.625 [2024-10-21 12:07:07.030306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
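The waitforlisten/max_retries lines above are a bounded poll on the target's RPC UNIX socket. The real helper in autotest_common.sh carries more bookkeeping; a minimal equivalent, with the probe RPC and the loop body assumed rather than read from this log, is:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            # Any cheap RPC serves as a liveness probe once the socket is up.
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # retry budget exhausted
    }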
00:22:30.625 [2024-10-21 12:07:07.031571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.625 [2024-10-21 12:07:07.031724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.625 [2024-10-21 12:07:07.031872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.625 [2024-10-21 12:07:07.031874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.625 [2024-10-21 12:07:07.154488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.625 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.626 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:30.886 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.886 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.886 Malloc1 00:22:30.886 [2024-10-21 12:07:07.267125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.886 Malloc2 00:22:30.886 Malloc3 00:22:30.886 Malloc4 00:22:30.886 Malloc5 00:22:30.886 Malloc6 00:22:30.886 Malloc7 00:22:31.147 Malloc8 00:22:31.147 Malloc9 00:22:31.147 Malloc10 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1044612 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1044612 /var/tmp/bdevperf.sock 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1044612 ']' 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.147 12:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 "name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 "name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 
"name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 "name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 "name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.147 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.147 { 00:22:31.147 "params": { 00:22:31.147 "name": "Nvme$subsystem", 00:22:31.147 "trtype": "$TEST_TRANSPORT", 00:22:31.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.147 "adrfam": "ipv4", 00:22:31.147 "trsvcid": "$NVMF_PORT", 00:22:31.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.147 "hdgst": ${hdgst:-false}, 00:22:31.147 "ddgst": ${ddgst:-false} 00:22:31.147 }, 00:22:31.147 "method": "bdev_nvme_attach_controller" 00:22:31.147 } 00:22:31.147 EOF 00:22:31.147 )") 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:31.148 [2024-10-21 12:07:07.712616] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:22:31.148 [2024-10-21 12:07:07.712669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044612 ] 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.148 { 00:22:31.148 "params": { 00:22:31.148 "name": "Nvme$subsystem", 00:22:31.148 "trtype": "$TEST_TRANSPORT", 00:22:31.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.148 "adrfam": "ipv4", 00:22:31.148 "trsvcid": "$NVMF_PORT", 00:22:31.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.148 "hdgst": ${hdgst:-false}, 00:22:31.148 "ddgst": ${ddgst:-false} 00:22:31.148 }, 00:22:31.148 "method": "bdev_nvme_attach_controller" 00:22:31.148 } 00:22:31.148 EOF 00:22:31.148 )") 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.148 { 00:22:31.148 "params": { 00:22:31.148 "name": "Nvme$subsystem", 00:22:31.148 "trtype": "$TEST_TRANSPORT", 00:22:31.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.148 "adrfam": "ipv4", 00:22:31.148 "trsvcid": "$NVMF_PORT", 00:22:31.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.148 "hdgst": ${hdgst:-false}, 00:22:31.148 "ddgst": ${ddgst:-false} 00:22:31.148 }, 00:22:31.148 "method": "bdev_nvme_attach_controller" 00:22:31.148 } 00:22:31.148 EOF 00:22:31.148 )") 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.148 { 00:22:31.148 "params": { 00:22:31.148 "name": "Nvme$subsystem", 00:22:31.148 "trtype": "$TEST_TRANSPORT", 00:22:31.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.148 "adrfam": "ipv4", 00:22:31.148 "trsvcid": "$NVMF_PORT", 00:22:31.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.148 "hdgst": ${hdgst:-false}, 00:22:31.148 "ddgst": ${ddgst:-false} 00:22:31.148 }, 00:22:31.148 "method": "bdev_nvme_attach_controller" 00:22:31.148 } 00:22:31.148 EOF 00:22:31.148 )") 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.148 { 00:22:31.148 "params": { 00:22:31.148 "name": "Nvme$subsystem", 00:22:31.148 "trtype": "$TEST_TRANSPORT", 00:22:31.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.148 
"adrfam": "ipv4", 00:22:31.148 "trsvcid": "$NVMF_PORT", 00:22:31.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.148 "hdgst": ${hdgst:-false}, 00:22:31.148 "ddgst": ${ddgst:-false} 00:22:31.148 }, 00:22:31.148 "method": "bdev_nvme_attach_controller" 00:22:31.148 } 00:22:31.148 EOF 00:22:31.148 )") 00:22:31.148 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:31.408 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:31.408 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:31.408 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:31.408 "params": { 00:22:31.408 "name": "Nvme1", 00:22:31.408 "trtype": "tcp", 00:22:31.408 "traddr": "10.0.0.2", 00:22:31.408 "adrfam": "ipv4", 00:22:31.408 "trsvcid": "4420", 00:22:31.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.408 "hdgst": false, 00:22:31.408 "ddgst": false 00:22:31.408 }, 00:22:31.408 "method": "bdev_nvme_attach_controller" 00:22:31.408 },{ 00:22:31.408 "params": { 00:22:31.408 "name": "Nvme2", 00:22:31.408 "trtype": "tcp", 00:22:31.408 "traddr": "10.0.0.2", 00:22:31.408 "adrfam": "ipv4", 00:22:31.408 "trsvcid": "4420", 00:22:31.408 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.408 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.408 "hdgst": false, 00:22:31.408 "ddgst": false 00:22:31.408 }, 00:22:31.408 "method": "bdev_nvme_attach_controller" 00:22:31.408 },{ 00:22:31.408 "params": { 00:22:31.408 "name": "Nvme3", 00:22:31.408 "trtype": "tcp", 00:22:31.408 "traddr": "10.0.0.2", 00:22:31.408 "adrfam": "ipv4", 00:22:31.408 "trsvcid": "4420", 00:22:31.408 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.408 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.408 "hdgst": false, 00:22:31.408 "ddgst": false 00:22:31.408 }, 00:22:31.408 "method": "bdev_nvme_attach_controller" 00:22:31.408 },{ 00:22:31.408 "params": { 00:22:31.408 "name": "Nvme4", 00:22:31.408 "trtype": "tcp", 00:22:31.408 "traddr": "10.0.0.2", 00:22:31.408 "adrfam": "ipv4", 00:22:31.408 "trsvcid": "4420", 00:22:31.408 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.408 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.408 "hdgst": false, 00:22:31.408 "ddgst": false 00:22:31.408 }, 00:22:31.408 "method": "bdev_nvme_attach_controller" 00:22:31.408 },{ 00:22:31.408 "params": { 00:22:31.408 "name": "Nvme5", 00:22:31.408 "trtype": "tcp", 00:22:31.408 "traddr": "10.0.0.2", 00:22:31.408 "adrfam": "ipv4", 00:22:31.408 "trsvcid": "4420", 00:22:31.408 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.408 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 },{ 00:22:31.409 "params": { 00:22:31.409 "name": "Nvme6", 00:22:31.409 "trtype": "tcp", 00:22:31.409 "traddr": "10.0.0.2", 00:22:31.409 "adrfam": "ipv4", 00:22:31.409 "trsvcid": "4420", 00:22:31.409 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.409 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 },{ 00:22:31.409 "params": { 00:22:31.409 "name": "Nvme7", 00:22:31.409 "trtype": "tcp", 00:22:31.409 "traddr": "10.0.0.2", 
00:22:31.409 "adrfam": "ipv4", 00:22:31.409 "trsvcid": "4420", 00:22:31.409 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.409 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 },{ 00:22:31.409 "params": { 00:22:31.409 "name": "Nvme8", 00:22:31.409 "trtype": "tcp", 00:22:31.409 "traddr": "10.0.0.2", 00:22:31.409 "adrfam": "ipv4", 00:22:31.409 "trsvcid": "4420", 00:22:31.409 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.409 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 },{ 00:22:31.409 "params": { 00:22:31.409 "name": "Nvme9", 00:22:31.409 "trtype": "tcp", 00:22:31.409 "traddr": "10.0.0.2", 00:22:31.409 "adrfam": "ipv4", 00:22:31.409 "trsvcid": "4420", 00:22:31.409 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.409 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 },{ 00:22:31.409 "params": { 00:22:31.409 "name": "Nvme10", 00:22:31.409 "trtype": "tcp", 00:22:31.409 "traddr": "10.0.0.2", 00:22:31.409 "adrfam": "ipv4", 00:22:31.409 "trsvcid": "4420", 00:22:31.409 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.409 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.409 "hdgst": false, 00:22:31.409 "ddgst": false 00:22:31.409 }, 00:22:31.409 "method": "bdev_nvme_attach_controller" 00:22:31.409 }' 00:22:31.409 [2024-10-21 12:07:07.790565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.409 [2024-10-21 12:07:07.827048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.795 Running I/O for 10 seconds... 
00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.795 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.055 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.055 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:33.055 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:33.055 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:33.315 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.590 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1044316 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1044316 ']' 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1044316 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044316 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.590 12:07:10 
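Reassembling the iostat polling above into its function form: waitforio (shutdown.sh@51-70, the -z argument guards omitted here) gives bdevperf up to ten 0.25-second polls for the first bdev to accumulate 100 completed reads (here it took three samples: 3, 67, then 131 num_read_ops) before the test may kill the target mid-I/O, which is the point of shutdown_tc3:

    waitforio() {
        local rpc_addr=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # Sample the bdev's read counter over the bdevperf RPC socket.
            read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1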
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044316' 00:22:33.590 killing process with pid 1044316 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1044316 00:22:33.590 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1044316 00:22:33.590 [2024-10-21 12:07:10.099981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de6d0 is same with the state(6) to be set 00:22:33.590 [2024-10-21 12:07:10.101408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc010 is same with the state(6) to be set 00:22:33.591 [2024-10-21 12:07:10.102347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc4e0 is same with the state(6) to be set 00:22:33.591 [2024-10-21 12:07:10.103544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same
with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103778] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.103869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dc9d0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the 
state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.592 [2024-10-21 12:07:10.104610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 
12:07:10.104755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.104835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dcea0 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same 
with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.593 [2024-10-21 12:07:10.105697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105702] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the 
state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.105883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd370 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 
12:07:10.106840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.594 [2024-10-21 12:07:10.106868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same 
with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.106995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dd840 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107679] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the 
state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.107814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.118215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad32c0 is same with the state(6) to be set 00:22:33.595 [2024-10-21 12:07:10.118360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.595 [2024-10-21 12:07:10.118397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.595 [2024-10-21 12:07:10.118404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
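
[note] The repeated "recv state of tqpair=... is same with the state(6) to be set" errors above come from the receive-state setters in SPDK's TCP code (tcp.c:1773 on the target side, nvme_tcp.c:337 on the host side): each logs an error and bails out when asked to put a qpair into the receive state it is already in, and while a dying connection is being drained that guard can fire on every poll. A minimal sketch of the guard pattern, with invented names and an assumed state numbering (whatever state the log's "state(6)" denotes); this is not SPDK's actual code:

    #include <stdio.h>

    /* Hypothetical receive-state values; only the guard pattern matters. */
    enum recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tqpair { enum recv_state recv_state; };

    static void set_recv_state(struct tqpair *tq, enum recv_state state)
    {
        if (tq->recv_state == state) {
            /* Re-entering the current state: log and return early. Called
             * from a poller loop, this is what floods the log above. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&tq, RECV_STATE_ERROR); /* prints the error once */
        return 0;
    }
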
[... identical four-command abort sequence and recv-state error for tqpair=0x5bd610, 12:07:10.118453-12:07:10.118519 ...]
[... identical four-command abort sequence and recv-state error for tqpair=0xb11ed0, 12:07:10.118547-12:07:10.118612 ...]
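
[note] In the ABORTED - SQ DELETION lines above and below, "(00/08)" is the (status code type / status code) pair from the NVMe completion entry: SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", i.e. the queues were deleted while these admin ASYNC EVENT REQUESTs (and, further down, I/O WRITEs) were still outstanding, which is expected noise during a disconnect/reset test. A small self-contained sketch of how the two fields unpack from the 16-bit status field of a completion (bit layout per the NVMe spec; the decoding helper itself is illustrative, not an SPDK API):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Status field layout: bit 0 = phase tag, bits 8:1 = status code
         * (SC), bits 11:9 = status code type (SCT). */
        uint16_t status = (uint16_t)((0x0u << 9) | (0x08u << 1)); /* the (00/08) case */
        unsigned sct = (status >> 9) & 0x7;
        unsigned sc = (status >> 1) & 0xff;
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
        return 0;
    }
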
[... identical four-command abort sequence and recv-state error for tqpair=0x69b780, 12:07:10.118647-12:07:10.118711 ...]
[... identical four-command abort sequence and recv-state error for tqpair=0xacd650, 12:07:10.118733-12:07:10.118800 ...]
00:22:33.596 [2024-10-21 12:07:10.118816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddd30 is same with the state(6) to be set
[... same message repeated through 12:07:10.119002 for tqpair=0x24ddd30, interleaved in the raw console with the host-side nvme_qpair output below ...]
[... identical four-command abort sequence and recv-state error for tqpair=0x6a30f0, 12:07:10.118825-12:07:10.118903 ...]
[... identical four-command abort sequence and recv-state error for tqpair=0x69c700, 12:07:10.118926-12:07:10.119002 ...]
[... identical four-command abort sequence and recv-state error for tqpair=0x68d250, 12:07:10.119025-12:07:10.119093 ...]
00:22:33.597 [2024-10-21 12:07:10.119137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.597 [2024-10-21 12:07:10.119148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.597 [2024-10-21 12:07:10.119163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.597 [2024-10-21 12:07:10.119170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.597 [2024-10-21 12:07:10.119180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.597 [2024-10-21 12:07:10.119188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.597 [2024-10-21 12:07:10.119198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.597 [2024-10-21 12:07:10.119205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.597 [2024-10-21 12:07:10.119215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.597 [2024-10-21 12:07:10.119223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597 [2024-10-21 12:07:10.119401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597 [2024-10-21 12:07:10.119409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.597
[2024-10-21 12:07:10.119577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.597
[2024-10-21 12:07:10.119587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.597
[2024-10-21 12:07:10.119591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.598
[2024-10-21 12:07:10.119867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de200 is same with the state(6) to be set 00:22:33.598
[2024-10-21 12:07:10.119870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.598
[2024-10-21 12:07:10.119880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599
[2024-10-21 12:07:10.119985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599
[2024-10-21 12:07:10.119993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.599 [2024-10-21 12:07:10.120003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 
12:07:10.120182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120367] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a6920 was disconnected and freed. reset controller. 
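[Editor's aside, not part of the captured output: every completion above prints as "ABORTED - SQ DELETION (00/08)". The "(00/08)" pair is (status code type / status code): SCT 0x00 is the NVMe generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", meaning the in-flight READ/WRITE commands were flushed when their submission queue was torn down for the controller reset, not that the I/O itself failed on media. A minimal sketch of how a completion callback can tell these apart, assuming SPDK's public spdk/nvme.h API; the callback name io_complete_cb is hypothetical, not from this test:]

#include <stdbool.h>
#include "spdk/nvme.h"

/* Completion callback sketch: recognize the (00/08) status seen in the log.
 * The (SCT/SC) pair printed by spdk_nvme_print_completion maps to
 * cpl->status.sct and cpl->status.sc. */
static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The SQ was deleted (reset/disconnect in progress); the
		 * command never executed, so it is safe to requeue it once
		 * the controller has been reset. */
		return;
	}
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* Any other error status is a genuine I/O failure. */
	}
}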
00:22:33.599 [2024-10-21 12:07:10.120401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.599 [2024-10-21 12:07:10.120499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.599 [2024-10-21 12:07:10.120508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 
12:07:10.120576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 
12:07:10.120749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 
12:07:10.120926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.120984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.120993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.131565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.131586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.131604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.131623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 12:07:10.131640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.600 [2024-10-21 12:07:10.131647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.600 [2024-10-21 
12:07:10.131656 - 12:07:10.132077] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:40-63 nsid:1 lba:29696-32640 (lba += 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.601 [2024-10-21 12:07:10.132152] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a7ad0 was disconnected and freed. reset controller.
00:22:33.601 [2024-10-21 12:07:10.132597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad32c0 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0990 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bd610 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb11ed0 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132692 - 12:07:10.132757] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.601 [2024-10-21 12:07:10.132765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafd370 is same with the state(6) to be set
00:22:33.601 [2024-10-21 12:07:10.132785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69b780 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacd650 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a30f0 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69c700 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.132851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d250 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.135581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:33.601 [2024-10-21 12:07:10.135608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:33.601 [2024-10-21 12:07:10.136901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.601 [2024-10-21 12:07:10.136926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d250 with addr=10.0.0.2, port=4420
00:22:33.601 [2024-10-21 12:07:10.136936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d250 is same with the state(6) to be set
00:22:33.601 [2024-10-21 12:07:10.137137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.601 [2024-10-21 12:07:10.137147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69c700 with addr=10.0.0.2, port=4420
00:22:33.601 [2024-10-21 12:07:10.137155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69c700 is same with the state(6) to be set
00:22:33.601 [2024-10-21 12:07:10.137736] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.137775 - 12:07:10.137808] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:62-63 nsid:1 lba:24320-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.601 [2024-10-21 12:07:10.137817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5c80 is same with the state(6) to be set
00:22:33.601 [2024-10-21 12:07:10.137867] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaa5c80 was disconnected and freed. reset controller.
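Note on the connect() failures above: errno = 111 on Linux is ECONNREFUSED, i.e. nothing is listening at 10.0.0.2:4420 anymore while the host side keeps trying to rebuild its queue pairs. A minimal standalone check (illustrative only, not part of the captured run):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is errno 111 -- the value printed by
         * posix.c:1055:posix_sock_create in the records above. */
        printf("errno %d = %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }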
00:22:33.601 [2024-10-21 12:07:10.137911] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.137956] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.137995] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.138170] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.138194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d250 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.138206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69c700 (9): Bad file descriptor
00:22:33.601 [2024-10-21 12:07:10.139219] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.139262] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.601 [2024-10-21 12:07:10.139283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:33.601 [2024-10-21 12:07:10.139307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:33.602 [2024-10-21 12:07:10.139316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:33.602 [2024-10-21 12:07:10.139331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:33.602 [2024-10-21 12:07:10.139346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:33.602 [2024-10-21 12:07:10.139355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:33.602 [2024-10-21 12:07:10.139363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:33.602 [2024-10-21 12:07:10.139439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:33.602 [2024-10-21 12:07:10.139449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:33.602 [2024-10-21 12:07:10.139667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.602 [2024-10-21 12:07:10.139680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69b780 with addr=10.0.0.2, port=4420
00:22:33.602 [2024-10-21 12:07:10.139689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69b780 is same with the state(6) to be set
00:22:33.602 [2024-10-21 12:07:10.139992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69b780 (9): Bad file descriptor
00:22:33.602 [2024-10-21 12:07:10.140041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:33.602 [2024-10-21 12:07:10.140050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:33.602 [2024-10-21 12:07:10.140057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:33.602 [2024-10-21 12:07:10.140104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
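The cnode1/cnode2/cnode4 sequence above is SPDK's asynchronous controller reset path failing end to end: nvme_ctrlr_disconnect tears down the admin connection, spdk_nvme_ctrlr_reconnect_poll_async is then polled to bring the controller back, and because every TCP connect is refused the poll ends in nvme_ctrlr_fail and bdev_nvme reports "Resetting controller failed." A minimal sketch of that polling pattern, assuming the public spdk_nvme_ctrlr_* reconnect API from spdk/nvme.h (the real bdev_nvme drives this from a poller rather than a busy loop; this is not the bdev_nvme code itself):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* One disconnect/reconnect cycle in the style of the reset path above:
     * disconnect, start an async reconnect, poll until done or failed. */
    static int
    reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        spdk_nvme_ctrlr_disconnect(ctrlr);   /* -> "resetting controller" */
        spdk_nvme_ctrlr_reconnect_async(ctrlr);

        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);              /* reconnect still in flight */

        if (rc != 0) {
            /* Corresponds to "controller reinitialization failed" and
             * "Resetting controller failed." in the records above. */
            fprintf(stderr, "reconnect failed: %d\n", rc);
        }
        return rc;
    }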
00:22:33.602 [2024-10-21 12:07:10.142623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafd370 (9): Bad file descriptor
00:22:33.602-603 [2024-10-21 12:07:10.142744 - 12:07:10.143859] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (lba += 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.603 [2024-10-21 12:07:10.143867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98900 is same with the state(6) to be set
00:22:33.603-605 [2024-10-21 12:07:10.145147 - 12:07:10.146292] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba += 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.605 [2024-10-21 12:07:10.146302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa71b0 is same with the state(6) to be set
00:22:33.605-606 [2024-10-21 12:07:10.147574 - 12:07:10.148127] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-30 nsid:1 lba:24576-28416 (lba += 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.606 [2024-10-21 12:07:10.148137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.606 [2024-10-21 12:07:10.148458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.606 [2024-10-21 12:07:10.148467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.607 [2024-10-21 12:07:10.148503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 
12:07:10.148680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.148717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.148725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa86e0 is same with the state(6) to be set 00:22:33.607 [2024-10-21 12:07:10.149999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.607 [2024-10-21 12:07:10.150368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.607 [2024-10-21 12:07:10.150378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.608 [2024-10-21 12:07:10.150970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.608 [2024-10-21 12:07:10.150981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.150989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.150999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.151150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.151158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa9c60 is same with the state(6) to be set 00:22:33.609 [2024-10-21 12:07:10.152424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.609 [2024-10-21 12:07:10.152909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.609 [2024-10-21 12:07:10.152917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.152927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.152935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.152945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.152952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.152962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.152970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.152980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.152988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.152999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.610 [2024-10-21 12:07:10.153579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.610 [2024-10-21 12:07:10.153588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee030 is same with the state(6) to be set 00:22:33.611 [2024-10-21 12:07:10.154864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.154985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.154997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.611 [2024-10-21 12:07:10.155553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.611 [2024-10-21 12:07:10.155562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.612 [2024-10-21 12:07:10.155746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 
12:07:10.155922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.155984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.155992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.156001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.156009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.156017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92dfd0 is same with the state(6) to be set 00:22:33.612 [2024-10-21 12:07:10.157259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157393] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.612 [2024-10-21 12:07:10.157411] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:33.612 [2024-10-21 12:07:10.157495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:33.612 [2024-10-21 12:07:10.157896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.612 [2024-10-21 12:07:10.157913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a30f0 with addr=10.0.0.2, port=4420 00:22:33.612 [2024-10-21 12:07:10.157921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a30f0 is same with the state(6) to be set 00:22:33.612 [2024-10-21 12:07:10.158133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.612 [2024-10-21 12:07:10.158143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad0990 with addr=10.0.0.2, port=4420 00:22:33.612 [2024-10-21 12:07:10.158152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad0990 is same with the state(6) to be set 00:22:33.612 [2024-10-21 12:07:10.158208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.612 [2024-10-21 12:07:10.158222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd650 with addr=10.0.0.2, port=4420 00:22:33.612 [2024-10-21 12:07:10.158232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacd650 is same with the state(6) to be set 00:22:33.612 [2024-10-21 12:07:10.158530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.612 [2024-10-21 12:07:10.158543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bd610 with addr=10.0.0.2, port=4420 00:22:33.612 [2024-10-21 12:07:10.158551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd610 is same with the state(6) to be set 00:22:33.612 [2024-10-21 12:07:10.159883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.159897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.159909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.612 [2024-10-21 12:07:10.159917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.612 [2024-10-21 12:07:10.159927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.159935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.159945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.159952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.159962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.159969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.159979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.159996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.613 [2024-10-21 12:07:10.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.613 [2024-10-21 12:07:10.160506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.614 [2024-10-21 12:07:10.160898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.614 [2024-10-21 12:07:10.160906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92cac0 is same with the state(6) to be set 00:22:33.614 [2024-10-21 12:07:10.162654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:33.614 [2024-10-21 12:07:10.162681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.614 [2024-10-21 12:07:10.162692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:33.875 task offset: 32128 on job bdev=Nvme1n1 fails
00:22:33.875 
00:22:33.875 Latency(us)
[2024-10-21T10:07:10.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme1n1 ended in about 0.96 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme1n1 : 0.96 200.57 12.54 66.86 0.00 236630.83 14527.15 255153.49
00:22:33.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme2n1 ended in about 0.96 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme2n1 : 0.96 200.32 12.52 66.77 0.00 232053.76 16384.00 234181.97
00:22:33.875 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme3n1 ended in about 0.97 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme3n1 : 0.97 198.29 12.39 66.10 0.00 229645.87 20534.61 228939.09
00:22:33.875 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme4n1 ended in about 0.96 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme4n1 : 0.96 197.43 12.34 2.08 0.00 297211.16 18896.21 265639.25
00:22:33.875 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme5n1 : 0.97 131.86 8.24 65.93 0.00 294233.60 25995.95 256901.12
00:22:33.875 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme6n1 ended in about 0.97 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme6n1 : 0.97 197.31 12.33 65.77 0.00 216386.77 14308.69 251658.24
00:22:33.875 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme7n1 ended in about 0.98 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme7n1 : 0.98 200.92 12.56 65.61 0.00 208850.63 20643.84 251658.24
00:22:33.875 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme8n1 ended in about 0.98 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme8n1 : 0.98 198.37 12.40 65.44 0.00 206306.47 6225.92 253405.87
00:22:33.875 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme9n1 ended in about 0.99 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme9n1 : 0.99 136.01 8.50 58.87 0.00 272532.76 15947.09 263891.63
00:22:33.875 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.875 Job: Nvme10n1 ended in about 0.98 seconds with error
00:22:33.875 Verification LBA range: start 0x0 length 0x400
00:22:33.875 Nvme10n1 : 0.98 135.66 8.48 65.28 0.00 258594.06 6007.47 265639.25
[2024-10-21T10:07:10.471Z] ===================================================================================================================
00:22:33.876 [2024-10-21T10:07:10.471Z] Total : 1796.74 112.30 588.70 0.00 241262.44 6007.47 265639.25
00:22:33.876 [2024-10-21 12:07:10.186542] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:33.876 [2024-10-21 12:07:10.186570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:33.876 [2024-10-21 12:07:10.186969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.186987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad32c0 with addr=10.0.0.2, port=4420 [2024-10-21 12:07:10.187002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad32c0 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.187269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.187280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb11ed0 with addr=10.0.0.2, port=4420 [2024-10-21 12:07:10.187288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb11ed0 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.187300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a30f0 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.187312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0990 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.187327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacd650 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.187337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x5bd610 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.187778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.187794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69c700 with addr=10.0.0.2, port=4420 00:22:33.876 [2024-10-21 12:07:10.187802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69c700 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.188075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.188087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d250 with addr=10.0.0.2, port=4420 00:22:33.876 [2024-10-21 12:07:10.188095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d250 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.188266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.188276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69b780 with addr=10.0.0.2, port=4420 00:22:33.876 [2024-10-21 12:07:10.188283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69b780 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.188616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.876 [2024-10-21 12:07:10.188628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xafd370 with addr=10.0.0.2, port=4420 00:22:33.876 [2024-10-21 12:07:10.188636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafd370 is same with the state(6) to be set 00:22:33.876 [2024-10-21 12:07:10.188645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad32c0 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.188654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb11ed0 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.188663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.188670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.188679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:33.876 [2024-10-21 12:07:10.188691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.188698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.188706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:33.876 [2024-10-21 12:07:10.188717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.188728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.188735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:22:33.876 [2024-10-21 12:07:10.188745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.188752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.188759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:33.876 [2024-10-21 12:07:10.188790] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.188802] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.188814] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.188825] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.188836] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.188849] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.876 [2024-10-21 12:07:10.189185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69c700 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.189229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d250 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.189238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69b780 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.189248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafd370 (9): Bad file descriptor 00:22:33.876 [2024-10-21 12:07:10.189257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:33.876 [2024-10-21 12:07:10.189590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:33.876 [2024-10-21 12:07:10.189683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:33.876 [2024-10-21 12:07:10.189689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:33.876 [2024-10-21 12:07:10.189726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.876 [2024-10-21 12:07:10.189748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
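The repeated connect() failed, errno = 111 entries above are the expected signature of this test case: nvmf_tgt was stopped while bdevperf still held ten TCP controllers, so every reconnect attempt to 10.0.0.2:4420 is refused (ECONNREFUSED) and each controller ends up marked failed instead of being reset. A quick way to observe the same condition from a shell is a bash /dev/tcp probe; this is a hypothetical check for illustration, not part of the test scripts, and it assumes the target address and port taken from the log:

    # connect() to the NVMe/TCP listener; once nvmf_tgt is gone the
    # subshell's connect() fails with ECONNREFUSED, i.e. errno 111.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener up on 10.0.0.2:4420"
    else
        echo "connection refused - target is down (errno 111)"
    fi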
00:22:33.876 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1044612
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1044612
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1044612
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:34.822 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:34.822 rmmod nvme_tcp
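The NOT wait 1044612 trace above is the inverted assertion used throughout these tests: the bdevperf process was killed deliberately, so wait must return non-zero, and the raw status (255, anything above 128 means killed by a signal) is normalized down to 1 before the result is inverted. A stripped-down sketch of the same idea follows; it is modeled on the trace, not the verbatim autotest_common.sh helper (which additionally validates the argument with type -t):

    # Succeed only when the wrapped command fails, mirroring the
    # es=255 -> 127 -> 1 normalization visible in the trace above.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=127   # signal-killed: clamp the status
        ((es != 0)) && es=1      # any failure normalizes to 1
        ((es == 1))              # return 0 iff the command failed
    }
    # usage: NOT wait "$perfpid"   # app was killed, so wait must fail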
rmmod nvme_fabrics
00:22:35.083 rmmod nvme_keyring
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1044316 ']'
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1044316
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1044316 ']'
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1044316
00:22:35.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1044316) - No such process
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1044316 is not found'
00:22:35.083 Process with pid 1044316 is not found
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:35.083 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:36.996
00:22:36.996 real 0m7.033s
00:22:36.996 user 0m15.991s
00:22:36.996 sys 0m1.194s
00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:36.996 ************************************
00:22:36.996 END TEST nvmf_shutdown_tc3
00:22:36.996 ************************************
00:22:36.996 12:07:13
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.996 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.258 ************************************ 00:22:37.258 START TEST nvmf_shutdown_tc4 00:22:37.258 ************************************ 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:37.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:37.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:37.258 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.259 12:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:37.259 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:37.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:37.259 12:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:37.259 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:22:37.520 00:22:37.520 --- 10.0.0.2 ping statistics --- 00:22:37.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.520 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:22:37.520 00:22:37.520 --- 10.0.0.1 ping statistics --- 00:22:37.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.520 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.520 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1045837 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1045837 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1045837 ']' 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
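Both ping checks above complete with sub-millisecond RTTs, confirming the namespace topology that common.sh just built: the target side of the E810 port pair lives in cvl_0_0_ns_spdk with 10.0.0.2, while the initiator side stays in the root namespace with 10.0.0.1. Condensed into plain iproute2 commands (device names and addresses taken directly from the trace), the setup is roughly:

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Verify reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1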
00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.521 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.521 [2024-10-21 12:07:14.055913] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:22:37.521 [2024-10-21 12:07:14.055980] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.782 [2024-10-21 12:07:14.146552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.782 [2024-10-21 12:07:14.182177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.782 [2024-10-21 12:07:14.182210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.782 [2024-10-21 12:07:14.182216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.782 [2024-10-21 12:07:14.182221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.782 [2024-10-21 12:07:14.182226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.782 [2024-10-21 12:07:14.183832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.782 [2024-10-21 12:07:14.183983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.782 [2024-10-21 12:07:14.184134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.782 [2024-10-21 12:07:14.184137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.353 [2024-10-21 12:07:14.914612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:38.353 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.353 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.613 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.613 Malloc1 
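The for i in "${num_subsystems[@]}" loop traced above does not issue RPCs one by one: each iteration cats another block into rpcs.txt, and the Malloc1 line here (with Malloc2 through Malloc10 following below) is that file being replayed as one batch. A sketch of the pattern, modeled on target/shutdown.sh rather than quoted from it; the bdev size and block-size values are illustrative assumptions, and 10.0.0.2:4420 is the listener from this run:

    # One bdev + subsystem + namespace + listener block per cnode...
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    # ...then a single batched rpc.py call executes the whole file.
    ./scripts/rpc.py < rpcs.txt

Batching matters here because tc4 stands up ten subsystems and then immediately drives them with spdk_nvme_perf (-q 128, 44 KiB writes, seen just below) so that the shutdown happens under active load.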
00:22:38.613 [2024-10-21 12:07:15.028180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.613 Malloc2 00:22:38.613 Malloc3 00:22:38.613 Malloc4 00:22:38.613 Malloc5 00:22:38.613 Malloc6 00:22:38.880 Malloc7 00:22:38.880 Malloc8 00:22:38.880 Malloc9 00:22:38.880 Malloc10 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1046143 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:38.880 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:39.154 [2024-10-21 12:07:15.496431] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1045837 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1045837 ']' 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1045837 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045837 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045837' 00:22:44.484 killing process with pid 1045837 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1045837 00:22:44.484 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1045837 00:22:44.484 [2024-10-21 12:07:20.502851] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2e3a0 is same with the state(6) to be set
00:22:44.484 [2024-10-21 12:07:20.502897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2e3a0 is same with the state(6) to be set
00:22:44.484 [2024-10-21 12:07:20.502941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2e870 is same with the state(6) to be set
00:22:44.484 [... same recv-state error repeated three more times for tqpair=0xb2e870, 12:07:20.502971 through 12:07:20.502983 ...]
00:22:44.484 [2024-10-21 12:07:20.503382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2ded0 is same with the state(6) to be set
00:22:44.484 [... same recv-state error repeated seven more times for tqpair=0xb2ded0, 12:07:20.503405 through 12:07:20.503444 ...]
00:22:44.485 Write completed with error (sct=0, sc=8)
00:22:44.485 starting I/O failed: -6
00:22:44.485 [... 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' repeated for the remaining I/Os queued on this qpair ...]
00:22:44.485 [2024-10-21 12:07:20.504711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.485 [... same write-error/I/O-failure flood for the next queue ...]
00:22:44.485 [2024-10-21 12:07:20.505585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.485 [... same write-error/I/O-failure flood for the next queue ...]
00:22:44.485 [2024-10-21 12:07:20.506502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.486 [... same write-error/I/O-failure flood for the next queue ...]
00:22:44.486 [2024-10-21 12:07:20.507899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.486 NVMe io qpair process completion error
00:22:44.486 [... write-error/I/O-failure flood for the next controller ...]
00:22:44.487 [2024-10-21 12:07:20.509059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.487 starting I/O failed: -6
00:22:44.487 Write completed with error (sct=0, sc=8)
00:22:44.487 [... flood continues ...]
00:22:44.487
Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 [2024-10-21 12:07:20.509855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 
00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.487 [2024-10-21 12:07:20.510770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.487 Write completed with error (sct=0, sc=8) 00:22:44.487 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error 
(sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error 
(sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 [2024-10-21 12:07:20.512536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.488 NVMe io qpair process completion error 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with 
error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 [2024-10-21 12:07:20.513860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 starting I/O failed: -6 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.488 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 
Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 [2024-10-21 12:07:20.514685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 
starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 [2024-10-21 12:07:20.515625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.489 Write completed with error (sct=0, sc=8) 00:22:44.489 starting I/O failed: -6 00:22:44.490 
Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 [2024-10-21 12:07:20.517261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.490 NVMe io qpair process completion error 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with 
error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 [2024-10-21 12:07:20.518307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with 
error (sct=0, sc=8) 00:22:44.490 starting I/O failed: -6 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.490 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 [2024-10-21 12:07:20.519129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 
00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 [2024-10-21 12:07:20.520073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with 
error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.491 Write completed with error (sct=0, sc=8) 00:22:44.491 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error 
(sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 [2024-10-21 12:07:20.522760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.492 NVMe io qpair process completion error 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 starting I/O failed: -6 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 Write completed with error (sct=0, sc=8) 00:22:44.492 [2024-10-21 12:07:20.523783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.492 Write completed with error (sct=0, sc=8) 
00:22:44.492 starting I/O failed: -6
00:22:44.492 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.492 [2024-10-21 12:07:20.524598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.493 [2024-10-21 12:07:20.525534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.494 [2024-10-21 12:07:20.527197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.494 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.494 [2024-10-21 12:07:20.528277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.494 [2024-10-21 12:07:20.529133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.495 [2024-10-21 12:07:20.530072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.495 [2024-10-21 12:07:20.533291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.495 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.496 [2024-10-21 12:07:20.534499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.496 [2024-10-21 12:07:20.535474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.497 [2024-10-21 12:07:20.536383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.497 [2024-10-21 12:07:20.538014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.497 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.498 [2024-10-21 12:07:20.539165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.498 [2024-10-21 12:07:20.540161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.498 [2024-10-21 12:07:20.541082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.499 [2024-10-21 12:07:20.543778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.499 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.499 [2024-10-21 12:07:20.544962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.500 [2024-10-21 12:07:20.545802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.500 [2024-10-21 12:07:20.546742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:44.501 starting I/O failed: -6
00:22:44.501 Write
completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 [2024-10-21 12:07:20.548198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.501 NVMe io qpair process completion error 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 starting I/O failed: -6 00:22:44.501 Write completed with error (sct=0, sc=8) 00:22:44.501 
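The two repeated messages above decode as follows: under sct=0 (the NVMe generic command status type), sc=8 is "Command Aborted due to SQ Deletion" per the NVMe base specification, and -6 is ENXIO ("No such device or address"), i.e. the qpair's TCP connection disappeared beneath the in-flight writes, which is exactly the condition this shutdown test provokes. A minimal bash sketch for summarizing such a burst from a captured log (the file name perf.log is hypothetical):

  # tally aborted completions, rejected submissions, and CQ transport errors per qpair
  grep -c 'Write completed with error (sct=0, sc=8)' perf.log
  grep -c 'starting I/O failed: -6' perf.log
  grep -o 'on qpair id [0-9]*' perf.log | sort | uniq -c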
00:22:44.501 Write completed with error (sct=0, sc=8)
00:22:44.501 starting I/O failed: -6
00:22:44.501 [the same pair repeats throughout, between and after the CQ errors below; verbatim repeats elided]
00:22:44.501 [2024-10-21 12:07:20.549904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.502 [2024-10-21 12:07:20.550842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.503 [2024-10-21 12:07:20.552704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.503 NVMe io qpair process completion error
00:22:44.503 Initializing NVMe Controllers
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:44.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:44.503 Controller IO queue size 128, less than required.
00:22:44.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.503 [the two advisory lines above are reported once for each of the ten attached controllers]
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:44.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:44.503 Initialization complete. Launching workers.
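Every attached controller reports an IO queue size of 128, lower than the depth this run asks for, so surplus requests wait inside the host driver and that queuing time is counted in the latency figures below. A minimal sketch of following the advisory on a manual rerun, assuming the usual spdk_nvme_perf options (-r transport ID, -q queue depth, -o IO size in bytes, -w workload, -t run time in seconds; the transport string and values here are illustrative, not taken from this run):

  # keep the per-controller queue depth at or below the advertised 128
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -q 128 -o 4096 -w write -t 10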
00:22:44.503 ========================================================
00:22:44.503 Latency(us)
00:22:44.503 Device Information : IOPS MiB/s Average min max
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1913.59 82.22 66910.38 701.16 119794.39
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1907.61 81.97 67141.02 691.21 123874.47
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1878.57 80.72 68210.69 829.05 123356.19
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1913.80 82.23 66977.17 628.16 122055.97
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1869.39 80.33 68613.92 698.65 128690.46
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1867.04 80.22 68723.90 677.19 130151.12
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1885.40 81.01 68091.80 861.18 125297.38
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1905.05 81.86 67404.73 952.03 134788.20
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1864.26 80.11 68184.99 884.70 123279.19
00:22:44.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1908.46 82.00 66623.20 794.76 126333.78
00:22:44.503 ========================================================
00:22:44.503 Total : 18913.16 812.67 67681.18 628.16 134788.20
00:22:44.503
00:22:44.503 [2024-10-21 12:07:20.556779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1815120 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181e490 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18004c0 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a8870 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2680 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ad780 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.556977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b7580 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.557008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b050 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.557037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f720 is same with the state(6) to be set
00:22:44.503 [2024-10-21 12:07:20.557066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18007f0 is same with the state(6) to be set
00:22:44.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:44.503 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:45.445 12:07:21
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1046143 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1046143 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1046143 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.445 rmmod nvme_tcp 00:22:45.445 rmmod nvme_fabrics 00:22:45.445 rmmod nvme_keyring 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1045837 ']' 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1045837 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1045837 ']' 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1045837 00:22:45.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1045837) - No such process 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1045837 is not found' 00:22:45.445 Process with pid 1045837 is not found 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.445 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.446 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.446 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.359 00:22:47.359 real 0m10.270s 00:22:47.359 user 0m28.071s 00:22:47.359 sys 0m3.876s 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.359 ************************************ 00:22:47.359 END TEST nvmf_shutdown_tc4 00:22:47.359 ************************************ 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:47.359 00:22:47.359 real 0m42.647s 00:22:47.359 user 1m42.075s 00:22:47.359 sys 0m13.710s 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.359 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.359 ************************************ 00:22:47.359 END TEST nvmf_shutdown 00:22:47.359 ************************************ 00:22:47.621 12:07:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:47.621 00:22:47.621 real 12m48.809s 00:22:47.621 user 27m10.828s 00:22:47.621 sys 3m50.381s 00:22:47.621 12:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.621 12:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.621 ************************************ 00:22:47.621 END TEST nvmf_target_extra 00:22:47.621 ************************************ 00:22:47.621 12:07:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.621 12:07:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.621 12:07:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.621 12:07:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.621 ************************************ 00:22:47.621 START TEST nvmf_host 00:22:47.621 ************************************ 00:22:47.621 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.621 * Looking for test storage... 00:22:47.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:47.621 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:47.621 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:47.621 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.883 --rc genhtml_branch_coverage=1 00:22:47.883 --rc genhtml_function_coverage=1 00:22:47.883 --rc genhtml_legend=1 00:22:47.883 --rc geninfo_all_blocks=1 00:22:47.883 --rc geninfo_unexecuted_blocks=1 00:22:47.883 00:22:47.883 ' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.883 --rc genhtml_branch_coverage=1 00:22:47.883 --rc genhtml_function_coverage=1 00:22:47.883 --rc genhtml_legend=1 00:22:47.883 --rc geninfo_all_blocks=1 00:22:47.883 --rc geninfo_unexecuted_blocks=1 00:22:47.883 00:22:47.883 ' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.883 --rc genhtml_branch_coverage=1 00:22:47.883 --rc genhtml_function_coverage=1 00:22:47.883 --rc genhtml_legend=1 00:22:47.883 --rc geninfo_all_blocks=1 00:22:47.883 --rc geninfo_unexecuted_blocks=1 00:22:47.883 00:22:47.883 ' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.883 --rc genhtml_branch_coverage=1 00:22:47.883 --rc genhtml_function_coverage=1 00:22:47.883 --rc genhtml_legend=1 00:22:47.883 --rc geninfo_all_blocks=1 00:22:47.883 --rc geninfo_unexecuted_blocks=1 00:22:47.883 00:22:47.883 ' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:47.883 12:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.884 ************************************ 00:22:47.884 START TEST nvmf_multicontroller 00:22:47.884 ************************************ 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:47.884 * Looking for test storage... 
00:22:47.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:47.884 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.145 --rc genhtml_branch_coverage=1 00:22:48.145 --rc genhtml_function_coverage=1 00:22:48.145 --rc genhtml_legend=1 00:22:48.145 --rc geninfo_all_blocks=1 00:22:48.145 --rc geninfo_unexecuted_blocks=1 00:22:48.145 00:22:48.145 ' 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.145 --rc genhtml_branch_coverage=1 00:22:48.145 --rc genhtml_function_coverage=1 00:22:48.145 --rc genhtml_legend=1 00:22:48.145 --rc geninfo_all_blocks=1 00:22:48.145 --rc geninfo_unexecuted_blocks=1 00:22:48.145 00:22:48.145 ' 00:22:48.145 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.145 --rc genhtml_branch_coverage=1 00:22:48.146 --rc genhtml_function_coverage=1 00:22:48.146 --rc genhtml_legend=1 00:22:48.146 --rc geninfo_all_blocks=1 00:22:48.146 --rc geninfo_unexecuted_blocks=1 00:22:48.146 00:22:48.146 ' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:48.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.146 --rc genhtml_branch_coverage=1 00:22:48.146 --rc genhtml_function_coverage=1 00:22:48.146 --rc genhtml_legend=1 00:22:48.146 --rc geninfo_all_blocks=1 00:22:48.146 --rc geninfo_unexecuted_blocks=1 00:22:48.146 00:22:48.146 ' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:48.146 12:07:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.146 12:07:24 
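The "integer expression expected" complaint above is common.sh line 33 evaluating [ '' -eq 1 ] because an optional config flag is unset; -eq requires a numeric operand. A minimal reproduction and the usual guard (the variable name here is illustrative, not the flag common.sh actually tests):

flag=""                     # stands in for the unset config variable
[ "$flag" -eq 1 ]           # -> [: : integer expression expected (status 2)
[ "${flag:-0}" -eq 1 ]      # default empty to 0: test is false, no error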
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.146 12:07:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.286 
12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:56.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:56.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.286 12:07:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:56.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:56.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
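Device discovery above boils down to a sysfs walk: for each matched PCI address, the kernel lists the bound network interfaces under the device node. A condensed sketch of that loop using the two e810 ports found in this run:

pci_devs=(0000:4b:00.0 0000:4b:00.1)                    # ports matched above
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # iface dirs under the device
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip path, keep iface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done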
00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.286 12:07:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.286 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.286 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.286 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.286 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:56.286 00:22:56.286 --- 10.0.0.2 ping statistics --- 00:22:56.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.286 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:56.286 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:22:56.286 00:22:56.286 --- 10.0.0.1 ping statistics --- 00:22:56.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.286 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1051715 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1051715 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1051715 ']' 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.287 12:07:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.287 [2024-10-21 12:07:32.220424] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
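The namespace topology nvmf_tcp_init assembled above, condensed to its bare commands: the target port moves into a private netns, each side gets a /24 address, an SPDK-tagged firewall rule opens 4420, and the target app launches inside the namespace (binary path shortened, comment text simplified):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                      # tagged for easy teardown
ping -c 1 10.0.0.2                                      # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &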
00:22:56.287 [2024-10-21 12:07:32.220488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.287 [2024-10-21 12:07:32.312355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:56.287 [2024-10-21 12:07:32.364357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.287 [2024-10-21 12:07:32.364412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.287 [2024-10-21 12:07:32.364421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.287 [2024-10-21 12:07:32.364429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.287 [2024-10-21 12:07:32.364435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.287 [2024-10-21 12:07:32.366498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.287 [2024-10-21 12:07:32.366659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.287 [2024-10-21 12:07:32.366660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.547 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.548 [2024-10-21 12:07:33.105886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.548 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.808 Malloc0 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 [2024-10-21 12:07:33.178834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 [2024-10-21 12:07:33.190676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 Malloc1 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1051906 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1051906 /var/tmp/bdevperf.sock 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1051906 ']' 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
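rpc_cmd in these traces forwards its arguments to SPDK's stock scripts/rpc.py client against the target's /var/tmp/spdk.sock, so the subsystem setup just replayed is, condensed:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ...repeated for cnode2/Malloc1, then bdevperf starts with its own RPC socket:
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &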
00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.809 12:07:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.752 NVMe0n1 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.752 1 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.752 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.753 request: 00:22:57.753 { 00:22:57.753 "name": "NVMe0", 00:22:57.753 "trtype": "tcp", 00:22:57.753 "traddr": "10.0.0.2", 00:22:57.753 "adrfam": "ipv4", 00:22:57.753 "trsvcid": "4420", 00:22:57.753 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:57.753 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:57.753 "hostaddr": "10.0.0.1", 00:22:57.753 "prchk_reftag": false, 00:22:57.753 "prchk_guard": false, 00:22:57.753 "hdgst": false, 00:22:57.753 "ddgst": false, 00:22:57.753 "allow_unrecognized_csi": false, 00:22:57.753 "method": "bdev_nvme_attach_controller", 00:22:57.753 "req_id": 1 00:22:57.753 } 00:22:57.753 Got JSON-RPC error response 00:22:57.753 response: 00:22:57.753 { 00:22:57.753 "code": -114, 00:22:57.753 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:57.753 } 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.753 request: 00:22:57.753 { 00:22:57.753 "name": "NVMe0", 00:22:57.753 "trtype": "tcp", 00:22:57.753 "traddr": "10.0.0.2", 00:22:57.753 "adrfam": "ipv4", 00:22:57.753 "trsvcid": "4420", 00:22:57.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.753 "hostaddr": "10.0.0.1", 00:22:57.753 "prchk_reftag": false, 00:22:57.753 "prchk_guard": false, 00:22:57.753 "hdgst": false, 00:22:57.753 "ddgst": false, 00:22:57.753 "allow_unrecognized_csi": false, 00:22:57.753 "method": "bdev_nvme_attach_controller", 00:22:57.753 "req_id": 1 00:22:57.753 } 00:22:57.753 Got JSON-RPC error response 00:22:57.753 response: 00:22:57.753 { 00:22:57.753 "code": -114, 00:22:57.753 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:57.753 } 00:22:57.753 12:07:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.753 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.014 request: 00:22:58.014 { 00:22:58.014 "name": "NVMe0", 00:22:58.014 "trtype": "tcp", 00:22:58.014 "traddr": "10.0.0.2", 00:22:58.014 "adrfam": "ipv4", 00:22:58.014 "trsvcid": "4420", 00:22:58.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.014 "hostaddr": "10.0.0.1", 00:22:58.014 "prchk_reftag": false, 00:22:58.014 "prchk_guard": false, 00:22:58.014 "hdgst": false, 00:22:58.014 "ddgst": false, 00:22:58.014 "multipath": "disable", 00:22:58.014 "allow_unrecognized_csi": false, 00:22:58.014 "method": "bdev_nvme_attach_controller", 00:22:58.014 "req_id": 1 00:22:58.014 } 00:22:58.014 Got JSON-RPC error response 00:22:58.014 response: 00:22:58.014 { 00:22:58.014 "code": -114, 00:22:58.014 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:58.014 } 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.014 12:07:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.014 request: 00:22:58.014 { 00:22:58.014 "name": "NVMe0", 00:22:58.014 "trtype": "tcp", 00:22:58.014 "traddr": "10.0.0.2", 00:22:58.014 "adrfam": "ipv4", 00:22:58.014 "trsvcid": "4420", 00:22:58.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.014 "hostaddr": "10.0.0.1", 00:22:58.014 "prchk_reftag": false, 00:22:58.014 "prchk_guard": false, 00:22:58.014 "hdgst": false, 00:22:58.014 "ddgst": false, 00:22:58.014 "multipath": "failover", 00:22:58.014 "allow_unrecognized_csi": false, 00:22:58.014 "method": "bdev_nvme_attach_controller", 00:22:58.014 "req_id": 1 00:22:58.014 } 00:22:58.014 Got JSON-RPC error response 00:22:58.014 response: 00:22:58.014 { 00:22:58.014 "code": -114, 00:22:58.014 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:58.014 } 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:58.014 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.015 NVMe0n1 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
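The NOT/valid_exec_arg scaffolding wrapped around each expected failure above inverts the exit status, so the test passes only when the RPC is rejected. A simplified sketch of that helper (the real autotest_common.sh version also validates the wrapped command and propagates large exit codes):

NOT() {
    local es=0
    "$@" || es=$?           # capture the wrapped command's exit status
    (( es != 0 ))           # succeed only if the command failed
}
NOT false && echo "expected failure observed"
NOT true  || echo "an unexpected success is itself a test failure"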
00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.015 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.276 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:58.276 12:07:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.662 { 00:22:59.662 "results": [ 00:22:59.662 { 00:22:59.662 "job": "NVMe0n1", 00:22:59.662 "core_mask": "0x1", 00:22:59.662 "workload": "write", 00:22:59.662 "status": "finished", 00:22:59.662 "queue_depth": 128, 00:22:59.662 "io_size": 4096, 00:22:59.662 "runtime": 1.008365, 00:22:59.662 "iops": 18277.11195846742, 00:22:59.662 "mibps": 71.39496858776336, 00:22:59.662 "io_failed": 0, 00:22:59.662 "io_timeout": 0, 00:22:59.662 "avg_latency_us": 6977.317828178695, 00:22:59.662 "min_latency_us": 4969.8133333333335, 00:22:59.662 "max_latency_us": 16384.0 00:22:59.662 } 00:22:59.662 ], 00:22:59.662 "core_count": 1 00:22:59.662 } 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1051906 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
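The JSON result block above is internally consistent: "mibps" is just iops times the 4096-byte I/O size, scaled to MiB. A quick check:

awk 'BEGIN { iops = 18277.11195846742; io_size = 4096
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 71.39 MiB/s, matching the reported "mibps" field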
common/autotest_common.sh@950 -- # '[' -z 1051906 ']' 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1051906 00:22:59.662 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1051906 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1051906' 00:22:59.663 killing process with pid 1051906 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1051906 00:22:59.663 12:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1051906 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:59.663 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:59.663 [2024-10-21 12:07:33.330890] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:22:59.663 [2024-10-21 12:07:33.330964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051906 ] 00:22:59.663 [2024-10-21 12:07:33.413388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.663 [2024-10-21 12:07:33.466315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.663 [2024-10-21 12:07:34.697021] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name dfc7f8f5-2e7a-42e6-813e-ec23d440fda8 already exists 00:22:59.663 [2024-10-21 12:07:34.697071] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:dfc7f8f5-2e7a-42e6-813e-ec23d440fda8 alias for bdev NVMe1n1 00:22:59.663 [2024-10-21 12:07:34.697081] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:59.663 Running I/O for 1 seconds... 00:22:59.663 18262.00 IOPS, 71.34 MiB/s 00:22:59.663 Latency(us) 00:22:59.663 [2024-10-21T10:07:36.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.663 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:59.663 NVMe0n1 : 1.01 18277.11 71.39 0.00 0.00 6977.32 4969.81 16384.00 00:22:59.663 [2024-10-21T10:07:36.258Z] =================================================================================================================== 00:22:59.663 [2024-10-21T10:07:36.258Z] Total : 18277.11 71.39 0.00 0.00 6977.32 4969.81 16384.00 00:22:59.663 Received shutdown signal, test time was about 1.000000 seconds 00:22:59.663 00:22:59.663 Latency(us) 00:22:59.663 [2024-10-21T10:07:36.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.663 [2024-10-21T10:07:36.258Z] =================================================================================================================== 00:22:59.663 [2024-10-21T10:07:36.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.663 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.663 rmmod nvme_tcp 00:22:59.663 rmmod nvme_fabrics 00:22:59.663 rmmod nvme_keyring 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:59.663 
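The teardown that follows undoes the firewall change by replaying the ruleset minus the SPDK-tagged entries, which is why the earlier insert carried the SPDK_NVMF comment. Condensed:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules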
12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1051715 ']' 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1051715 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1051715 ']' 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1051715 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1051715 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1051715' 00:22:59.663 killing process with pid 1051715 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1051715 00:22:59.663 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1051715 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.924 12:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.469 00:23:02.469 real 0m14.108s 00:23:02.469 user 0m17.270s 00:23:02.469 sys 0m6.611s 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.469 ************************************ 00:23:02.469 END TEST nvmf_multicontroller 00:23:02.469 ************************************ 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.469 ************************************ 00:23:02.469 START TEST nvmf_aer 00:23:02.469 ************************************ 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:02.469 * Looking for test storage... 00:23:02.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.469 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.470 --rc genhtml_branch_coverage=1 00:23:02.470 --rc genhtml_function_coverage=1 00:23:02.470 --rc genhtml_legend=1 00:23:02.470 --rc geninfo_all_blocks=1 00:23:02.470 --rc geninfo_unexecuted_blocks=1 00:23:02.470 00:23:02.470 ' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.470 --rc genhtml_branch_coverage=1 00:23:02.470 --rc genhtml_function_coverage=1 00:23:02.470 --rc genhtml_legend=1 00:23:02.470 --rc geninfo_all_blocks=1 00:23:02.470 --rc geninfo_unexecuted_blocks=1 00:23:02.470 00:23:02.470 ' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.470 --rc genhtml_branch_coverage=1 00:23:02.470 --rc genhtml_function_coverage=1 00:23:02.470 --rc genhtml_legend=1 00:23:02.470 --rc geninfo_all_blocks=1 00:23:02.470 --rc geninfo_unexecuted_blocks=1 00:23:02.470 00:23:02.470 ' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.470 --rc genhtml_branch_coverage=1 00:23:02.470 --rc genhtml_function_coverage=1 00:23:02.470 --rc genhtml_legend=1 00:23:02.470 --rc geninfo_all_blocks=1 00:23:02.470 --rc geninfo_unexecuted_blocks=1 00:23:02.470 00:23:02.470 ' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.470 12:07:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:10.618 12:07:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.618 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.619 12:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.619 
12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:10.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:10.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms
00:23:10.619
00:23:10.619 --- 10.0.0.2 ping statistics ---
00:23:10.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:10.619 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:10.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:10.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms
00:23:10.619
00:23:10.619 --- 10.0.0.1 ping statistics ---
00:23:10.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:10.619 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1056605
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1056605
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1056605 ']'
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:10.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:10.619 12:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:10.619 [2024-10-21 12:07:46.301798] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
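For readers following the nvmftestinit trace above: the harness builds a two-endpoint NVMe/TCP topology on a single host. The e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port 4420, and the two pings confirm the path in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that plumbing, with eth_tgt and eth_ini as hypothetical stand-ins for the cvl_* ports (run as root; this mirrors what the harness does, it is not the harness itself):

    NS=tgt_ns_demo
    TGT_IF=eth_tgt   # target-side port (cvl_0_0 in the log); placeholder name
    INI_IF=eth_ini   # initiator-side port (cvl_0_1 in the log); placeholder name

    ip netns add "$NS"                        # isolated namespace for the target
    ip link set "$TGT_IF" netns "$NS"         # move the target port into it
    ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator address in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

    ping -c 1 10.0.0.2                        # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> root namespace

Running the target inside its own namespace is what lets a single machine exercise a real NIC-to-NIC TCP path: traffic between 10.0.0.1 and 10.0.0.2 leaves one physical port and arrives on the other instead of being short-circuited through loopback.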
00:23:10.619 [2024-10-21 12:07:46.301860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.619 [2024-10-21 12:07:46.380185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.619 [2024-10-21 12:07:46.434104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.619 [2024-10-21 12:07:46.434161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.619 [2024-10-21 12:07:46.434170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.619 [2024-10-21 12:07:46.434177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.619 [2024-10-21 12:07:46.434183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.619 [2024-10-21 12:07:46.438350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.619 [2024-10-21 12:07:46.438542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.619 [2024-10-21 12:07:46.438768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.619 [2024-10-21 12:07:46.438772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.619 [2024-10-21 12:07:47.166790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.619 Malloc0 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.619 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.882 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.883 [2024-10-21 12:07:47.241453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.883 [ 00:23:10.883 { 00:23:10.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:10.883 "subtype": "Discovery", 00:23:10.883 "listen_addresses": [], 00:23:10.883 "allow_any_host": true, 00:23:10.883 "hosts": [] 00:23:10.883 }, 00:23:10.883 { 00:23:10.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.883 "subtype": "NVMe", 00:23:10.883 "listen_addresses": [ 00:23:10.883 { 00:23:10.883 "trtype": "TCP", 00:23:10.883 "adrfam": "IPv4", 00:23:10.883 "traddr": "10.0.0.2", 00:23:10.883 "trsvcid": "4420" 00:23:10.883 } 00:23:10.883 ], 00:23:10.883 "allow_any_host": true, 00:23:10.883 "hosts": [], 00:23:10.883 "serial_number": "SPDK00000000000001", 00:23:10.883 "model_number": "SPDK bdev Controller", 00:23:10.883 "max_namespaces": 2, 00:23:10.883 "min_cntlid": 1, 00:23:10.883 "max_cntlid": 65519, 00:23:10.883 "namespaces": [ 00:23:10.883 { 00:23:10.883 "nsid": 1, 00:23:10.883 "bdev_name": "Malloc0", 00:23:10.883 "name": "Malloc0", 00:23:10.883 "nguid": "070C207D2FEB43379F4ECC662ADE3408", 00:23:10.883 "uuid": "070c207d-2feb-4337-9f4e-cc662ade3408" 00:23:10.883 } 00:23:10.883 ] 00:23:10.883 } 00:23:10.883 ] 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1056945 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']'
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:23:10.883 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']'
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:11.144 Malloc1
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:11.144 Asynchronous Event Request test
00:23:11.144 Attaching to 10.0.0.2
00:23:11.144 Attached to 10.0.0.2
00:23:11.144 Registering asynchronous event callbacks...
00:23:11.144 Starting namespace attribute notice tests for all controllers...
00:23:11.144 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:23:11.144 aer_cb - Changed Namespace
00:23:11.144 Cleaning up...
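The aer.sh flow traced above reduces to a short RPC sequence against the running target: create the TCP transport, back namespace 1 with a malloc bdev, expose it through cnode1 with a listener, then hot-add a second namespace while the aer tool is attached. The namespace-attribute AEN ("aer_cb - Changed Namespace") is the expected result. A sketch of the same sequence as direct rpc.py calls, on the assumption of an SPDK checkout and the default /var/tmp/spdk.sock RPC socket (rpc_cmd in the log dispatches the same RPCs):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # With test/nvme/aer/aer attached to cnode1 and waiting (as in the trace),
    # hot-adding a second namespace is what fires the Changed Namespace notice:
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

Note that -m 2 when the subsystem is created caps it at two namespaces, which is why the get_subsystems dump below reports "max_namespaces": 2 alongside the newly added nsid 2.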
00:23:11.144 [ 00:23:11.144 { 00:23:11.144 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.144 "subtype": "Discovery", 00:23:11.144 "listen_addresses": [], 00:23:11.144 "allow_any_host": true, 00:23:11.144 "hosts": [] 00:23:11.144 }, 00:23:11.144 { 00:23:11.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.144 "subtype": "NVMe", 00:23:11.144 "listen_addresses": [ 00:23:11.144 { 00:23:11.144 "trtype": "TCP", 00:23:11.144 "adrfam": "IPv4", 00:23:11.144 "traddr": "10.0.0.2", 00:23:11.144 "trsvcid": "4420" 00:23:11.144 } 00:23:11.144 ], 00:23:11.144 "allow_any_host": true, 00:23:11.144 "hosts": [], 00:23:11.144 "serial_number": "SPDK00000000000001", 00:23:11.144 "model_number": "SPDK bdev Controller", 00:23:11.144 "max_namespaces": 2, 00:23:11.144 "min_cntlid": 1, 00:23:11.144 "max_cntlid": 65519, 00:23:11.144 "namespaces": [ 00:23:11.144 { 00:23:11.144 "nsid": 1, 00:23:11.144 "bdev_name": "Malloc0", 00:23:11.144 "name": "Malloc0", 00:23:11.144 "nguid": "070C207D2FEB43379F4ECC662ADE3408", 00:23:11.144 "uuid": "070c207d-2feb-4337-9f4e-cc662ade3408" 00:23:11.144 }, 00:23:11.144 { 00:23:11.144 "nsid": 2, 00:23:11.144 "bdev_name": "Malloc1", 00:23:11.144 "name": "Malloc1", 00:23:11.144 "nguid": "4BFC402F1F4945818B5275F7DE13325B", 00:23:11.144 "uuid": "4bfc402f-1f49-4581-8b52-75f7de13325b" 00:23:11.144 } 00:23:11.144 ] 00:23:11.144 } 00:23:11.144 ] 00:23:11.144 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1056945 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.145 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.406 rmmod 
nvme_tcp 00:23:11.406 rmmod nvme_fabrics 00:23:11.406 rmmod nvme_keyring 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1056605 ']' 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1056605 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1056605 ']' 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1056605 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056605 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056605' 00:23:11.406 killing process with pid 1056605 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1056605 00:23:11.406 12:07:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1056605 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.667 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.591 00:23:13.591 real 0m11.584s 00:23:13.591 user 0m8.563s 00:23:13.591 sys 0m6.177s 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.591 ************************************ 00:23:13.591 END TEST nvmf_aer 00:23:13.591 ************************************ 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:13.591 12:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.852 ************************************ 00:23:13.852 START TEST nvmf_async_init 00:23:13.852 ************************************ 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.852 * Looking for test storage... 00:23:13.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.852 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:13.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.853 --rc genhtml_branch_coverage=1 00:23:13.853 --rc genhtml_function_coverage=1 00:23:13.853 --rc genhtml_legend=1 00:23:13.853 --rc geninfo_all_blocks=1 00:23:13.853 --rc geninfo_unexecuted_blocks=1 00:23:13.853 00:23:13.853 ' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:13.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.853 --rc genhtml_branch_coverage=1 00:23:13.853 --rc genhtml_function_coverage=1 00:23:13.853 --rc genhtml_legend=1 00:23:13.853 --rc geninfo_all_blocks=1 00:23:13.853 --rc geninfo_unexecuted_blocks=1 00:23:13.853 00:23:13.853 ' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:13.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.853 --rc genhtml_branch_coverage=1 00:23:13.853 --rc genhtml_function_coverage=1 00:23:13.853 --rc genhtml_legend=1 00:23:13.853 --rc geninfo_all_blocks=1 00:23:13.853 --rc geninfo_unexecuted_blocks=1 00:23:13.853 00:23:13.853 ' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:13.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.853 --rc genhtml_branch_coverage=1 00:23:13.853 --rc genhtml_function_coverage=1 00:23:13.853 --rc genhtml_legend=1 00:23:13.853 --rc geninfo_all_blocks=1 00:23:13.853 --rc geninfo_unexecuted_blocks=1 00:23:13.853 00:23:13.853 ' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.853 12:07:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:13.853 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:13.853 12:07:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7e01f482ab144bb786d1684c5ceb32e6 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.114 12:07:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:22.267 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:22.267 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:22.267 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:22.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.267 12:07:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.267 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:23:22.268 00:23:22.268 --- 10.0.0.2 ping statistics --- 00:23:22.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.268 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:23:22.268 00:23:22.268 --- 10.0.0.1 ping statistics --- 00:23:22.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.268 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:22.268 12:07:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1061271 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1061271 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1061271 ']' 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.268 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.268 [2024-10-21 12:07:58.065277] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:23:22.268 [2024-10-21 12:07:58.065349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.268 [2024-10-21 12:07:58.155080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.268 [2024-10-21 12:07:58.206586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.268 [2024-10-21 12:07:58.206657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.268 [2024-10-21 12:07:58.206667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.268 [2024-10-21 12:07:58.206674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.268 [2024-10-21 12:07:58.206680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.268 [2024-10-21 12:07:58.207439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 [2024-10-21 12:07:58.949000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 null0 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7e01f482ab144bb786d1684c5ceb32e6 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.529 [2024-10-21 12:07:59.009395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.529 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.790 nvme0n1 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.790 [ 00:23:22.790 { 00:23:22.790 "name": "nvme0n1", 00:23:22.790 "aliases": [ 00:23:22.790 "7e01f482-ab14-4bb7-86d1-684c5ceb32e6" 00:23:22.790 ], 00:23:22.790 "product_name": "NVMe disk", 00:23:22.790 "block_size": 512, 00:23:22.790 "num_blocks": 2097152, 00:23:22.790 "uuid": "7e01f482-ab14-4bb7-86d1-684c5ceb32e6", 00:23:22.790 "numa_id": 0, 00:23:22.790 "assigned_rate_limits": { 00:23:22.790 "rw_ios_per_sec": 0, 00:23:22.790 "rw_mbytes_per_sec": 0, 00:23:22.790 "r_mbytes_per_sec": 0, 00:23:22.790 "w_mbytes_per_sec": 0 00:23:22.790 }, 00:23:22.790 "claimed": false, 00:23:22.790 "zoned": false, 00:23:22.790 "supported_io_types": { 00:23:22.790 "read": true, 00:23:22.790 "write": true, 00:23:22.790 "unmap": false, 00:23:22.790 "flush": true, 00:23:22.790 "reset": true, 00:23:22.790 "nvme_admin": true, 00:23:22.790 "nvme_io": true, 00:23:22.790 "nvme_io_md": false, 00:23:22.790 "write_zeroes": true, 00:23:22.790 "zcopy": false, 00:23:22.790 "get_zone_info": false, 00:23:22.790 "zone_management": false, 00:23:22.790 "zone_append": false, 00:23:22.790 "compare": true, 00:23:22.790 "compare_and_write": true, 00:23:22.790 "abort": true, 00:23:22.790 "seek_hole": false, 00:23:22.790 "seek_data": false, 00:23:22.790 "copy": true, 00:23:22.790 "nvme_iov_md": false 00:23:22.790 }, 00:23:22.790 
"memory_domains": [ 00:23:22.790 { 00:23:22.790 "dma_device_id": "system", 00:23:22.790 "dma_device_type": 1 00:23:22.790 } 00:23:22.790 ], 00:23:22.790 "driver_specific": { 00:23:22.790 "nvme": [ 00:23:22.790 { 00:23:22.790 "trid": { 00:23:22.790 "trtype": "TCP", 00:23:22.790 "adrfam": "IPv4", 00:23:22.790 "traddr": "10.0.0.2", 00:23:22.790 "trsvcid": "4420", 00:23:22.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:22.790 }, 00:23:22.790 "ctrlr_data": { 00:23:22.790 "cntlid": 1, 00:23:22.790 "vendor_id": "0x8086", 00:23:22.790 "model_number": "SPDK bdev Controller", 00:23:22.790 "serial_number": "00000000000000000000", 00:23:22.790 "firmware_revision": "25.01", 00:23:22.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.790 "oacs": { 00:23:22.790 "security": 0, 00:23:22.790 "format": 0, 00:23:22.790 "firmware": 0, 00:23:22.790 "ns_manage": 0 00:23:22.790 }, 00:23:22.790 "multi_ctrlr": true, 00:23:22.790 "ana_reporting": false 00:23:22.790 }, 00:23:22.790 "vs": { 00:23:22.790 "nvme_version": "1.3" 00:23:22.790 }, 00:23:22.790 "ns_data": { 00:23:22.790 "id": 1, 00:23:22.790 "can_share": true 00:23:22.790 } 00:23:22.790 } 00:23:22.790 ], 00:23:22.790 "mp_policy": "active_passive" 00:23:22.790 } 00:23:22.790 } 00:23:22.790 ] 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.790 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:22.791 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.791 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.791 [2024-10-21 12:07:59.287176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:22.791 [2024-10-21 12:07:59.287270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2d20 (9): Bad file descriptor 00:23:23.052 [2024-10-21 12:07:59.419440] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 [ 00:23:23.052 { 00:23:23.052 "name": "nvme0n1", 00:23:23.052 "aliases": [ 00:23:23.052 "7e01f482-ab14-4bb7-86d1-684c5ceb32e6" 00:23:23.052 ], 00:23:23.052 "product_name": "NVMe disk", 00:23:23.052 "block_size": 512, 00:23:23.052 "num_blocks": 2097152, 00:23:23.052 "uuid": "7e01f482-ab14-4bb7-86d1-684c5ceb32e6", 00:23:23.052 "numa_id": 0, 00:23:23.052 "assigned_rate_limits": { 00:23:23.052 "rw_ios_per_sec": 0, 00:23:23.052 "rw_mbytes_per_sec": 0, 00:23:23.052 "r_mbytes_per_sec": 0, 00:23:23.052 "w_mbytes_per_sec": 0 00:23:23.052 }, 00:23:23.052 "claimed": false, 00:23:23.052 "zoned": false, 00:23:23.052 "supported_io_types": { 00:23:23.052 "read": true, 00:23:23.052 "write": true, 00:23:23.052 "unmap": false, 00:23:23.052 "flush": true, 00:23:23.052 "reset": true, 00:23:23.052 "nvme_admin": true, 00:23:23.052 "nvme_io": true, 00:23:23.052 "nvme_io_md": false, 00:23:23.052 "write_zeroes": true, 00:23:23.052 "zcopy": false, 00:23:23.052 "get_zone_info": false, 00:23:23.052 "zone_management": false, 00:23:23.052 "zone_append": false, 00:23:23.052 "compare": true, 00:23:23.052 "compare_and_write": true, 00:23:23.052 "abort": true, 00:23:23.052 "seek_hole": false, 00:23:23.052 "seek_data": false, 00:23:23.052 "copy": true, 00:23:23.052 "nvme_iov_md": false 00:23:23.052 }, 00:23:23.052 "memory_domains": [ 00:23:23.052 { 00:23:23.052 "dma_device_id": "system", 00:23:23.052 "dma_device_type": 1 00:23:23.052 } 00:23:23.052 ], 00:23:23.052 "driver_specific": { 00:23:23.052 "nvme": [ 00:23:23.052 { 00:23:23.052 "trid": { 00:23:23.052 "trtype": "TCP", 00:23:23.052 "adrfam": "IPv4", 00:23:23.052 "traddr": "10.0.0.2", 00:23:23.052 "trsvcid": "4420", 00:23:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:23.052 }, 00:23:23.052 "ctrlr_data": { 00:23:23.052 "cntlid": 2, 00:23:23.052 "vendor_id": "0x8086", 00:23:23.052 "model_number": "SPDK bdev Controller", 00:23:23.052 "serial_number": "00000000000000000000", 00:23:23.052 "firmware_revision": "25.01", 00:23:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.052 "oacs": { 00:23:23.052 "security": 0, 00:23:23.052 "format": 0, 00:23:23.052 "firmware": 0, 00:23:23.052 "ns_manage": 0 00:23:23.052 }, 00:23:23.052 "multi_ctrlr": true, 00:23:23.052 "ana_reporting": false 00:23:23.052 }, 00:23:23.052 "vs": { 00:23:23.052 "nvme_version": "1.3" 00:23:23.052 }, 00:23:23.052 "ns_data": { 00:23:23.052 "id": 1, 00:23:23.052 "can_share": true 00:23:23.052 } 00:23:23.052 } 00:23:23.052 ], 00:23:23.052 "mp_policy": "active_passive" 00:23:23.052 } 00:23:23.052 } 00:23:23.052 ] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zC9SvMpBGg 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zC9SvMpBGg 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zC9SvMpBGg 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 [2024-10-21 12:07:59.507937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.052 [2024-10-21 12:07:59.508111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 [2024-10-21 12:07:59.532014] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.052 nvme0n1 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.052 [ 00:23:23.052 { 00:23:23.052 "name": "nvme0n1", 00:23:23.052 "aliases": [ 00:23:23.052 "7e01f482-ab14-4bb7-86d1-684c5ceb32e6" 00:23:23.052 ], 00:23:23.052 "product_name": "NVMe disk", 00:23:23.052 "block_size": 512, 00:23:23.052 "num_blocks": 2097152, 00:23:23.052 "uuid": "7e01f482-ab14-4bb7-86d1-684c5ceb32e6", 00:23:23.052 "numa_id": 0, 00:23:23.052 "assigned_rate_limits": { 00:23:23.052 "rw_ios_per_sec": 0, 00:23:23.052 "rw_mbytes_per_sec": 0, 00:23:23.052 "r_mbytes_per_sec": 0, 00:23:23.052 "w_mbytes_per_sec": 0 00:23:23.052 }, 00:23:23.052 "claimed": false, 00:23:23.052 "zoned": false, 00:23:23.052 "supported_io_types": { 00:23:23.052 "read": true, 00:23:23.052 "write": true, 00:23:23.052 "unmap": false, 00:23:23.052 "flush": true, 00:23:23.052 "reset": true, 00:23:23.052 "nvme_admin": true, 00:23:23.052 "nvme_io": true, 00:23:23.052 "nvme_io_md": false, 00:23:23.052 "write_zeroes": true, 00:23:23.052 "zcopy": false, 00:23:23.052 "get_zone_info": false, 00:23:23.052 "zone_management": false, 00:23:23.052 "zone_append": false, 00:23:23.052 "compare": true, 00:23:23.052 "compare_and_write": true, 00:23:23.052 "abort": true, 00:23:23.052 "seek_hole": false, 00:23:23.052 "seek_data": false, 00:23:23.052 "copy": true, 00:23:23.052 "nvme_iov_md": false 00:23:23.052 }, 00:23:23.052 "memory_domains": [ 00:23:23.052 { 00:23:23.052 "dma_device_id": "system", 00:23:23.052 "dma_device_type": 1 00:23:23.052 } 00:23:23.052 ], 00:23:23.052 "driver_specific": { 00:23:23.052 "nvme": [ 00:23:23.052 { 00:23:23.052 "trid": { 00:23:23.052 "trtype": "TCP", 00:23:23.052 "adrfam": "IPv4", 00:23:23.052 "traddr": "10.0.0.2", 00:23:23.052 "trsvcid": "4421", 00:23:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:23.052 }, 00:23:23.052 "ctrlr_data": { 00:23:23.052 "cntlid": 3, 00:23:23.052 "vendor_id": "0x8086", 00:23:23.052 "model_number": "SPDK bdev Controller", 00:23:23.052 "serial_number": "00000000000000000000", 00:23:23.052 "firmware_revision": "25.01", 00:23:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.052 "oacs": { 00:23:23.052 "security": 0, 00:23:23.052 "format": 0, 00:23:23.052 "firmware": 0, 00:23:23.052 "ns_manage": 0 00:23:23.052 }, 00:23:23.052 "multi_ctrlr": true, 00:23:23.052 "ana_reporting": false 00:23:23.052 }, 00:23:23.052 "vs": { 00:23:23.052 "nvme_version": "1.3" 00:23:23.052 }, 00:23:23.052 "ns_data": { 00:23:23.052 "id": 1, 00:23:23.052 "can_share": true 00:23:23.052 } 00:23:23.052 } 00:23:23.052 ], 00:23:23.052 "mp_policy": "active_passive" 00:23:23.052 } 00:23:23.052 } 00:23:23.052 ] 00:23:23.052 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.053 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.053 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.053 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zC9SvMpBGg 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.314 rmmod nvme_tcp 00:23:23.314 rmmod nvme_fabrics 00:23:23.314 rmmod nvme_keyring 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1061271 ']' 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1061271 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1061271 ']' 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1061271 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1061271 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1061271' 00:23:23.314 killing process with pid 1061271 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1061271 00:23:23.314 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1061271 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.575 12:07:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.490 00:23:25.490 real 0m11.832s 00:23:25.490 user 0m4.321s 00:23:25.490 sys 0m6.106s 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.490 ************************************ 00:23:25.490 END TEST nvmf_async_init 00:23:25.490 ************************************ 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.490 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.751 ************************************ 00:23:25.751 START TEST dma 00:23:25.751 ************************************ 00:23:25.751 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:25.751 * Looking for test storage... 00:23:25.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.752 --rc genhtml_branch_coverage=1 00:23:25.752 --rc genhtml_function_coverage=1 00:23:25.752 --rc genhtml_legend=1 00:23:25.752 --rc geninfo_all_blocks=1 00:23:25.752 --rc geninfo_unexecuted_blocks=1 00:23:25.752 00:23:25.752 ' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.752 --rc genhtml_branch_coverage=1 00:23:25.752 --rc genhtml_function_coverage=1 00:23:25.752 --rc genhtml_legend=1 00:23:25.752 --rc geninfo_all_blocks=1 00:23:25.752 --rc geninfo_unexecuted_blocks=1 00:23:25.752 00:23:25.752 ' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.752 --rc genhtml_branch_coverage=1 00:23:25.752 --rc genhtml_function_coverage=1 00:23:25.752 --rc genhtml_legend=1 00:23:25.752 --rc geninfo_all_blocks=1 00:23:25.752 --rc geninfo_unexecuted_blocks=1 00:23:25.752 00:23:25.752 ' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.752 --rc genhtml_branch_coverage=1 00:23:25.752 --rc genhtml_function_coverage=1 00:23:25.752 --rc genhtml_legend=1 00:23:25.752 --rc geninfo_all_blocks=1 00:23:25.752 --rc geninfo_unexecuted_blocks=1 00:23:25.752 00:23:25.752 ' 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.752 
12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.752 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.014 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.014 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.014 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:26.015 00:23:26.015 real 0m0.238s 00:23:26.015 user 0m0.157s 00:23:26.015 sys 0m0.097s 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 ************************************ 00:23:26.015 END TEST dma 00:23:26.015 ************************************ 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 ************************************ 00:23:26.015 START TEST nvmf_identify 00:23:26.015 
************************************ 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:26.015 * Looking for test storage... 00:23:26.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:26.015 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.277 --rc genhtml_branch_coverage=1 00:23:26.277 --rc genhtml_function_coverage=1 00:23:26.277 --rc genhtml_legend=1 00:23:26.277 --rc geninfo_all_blocks=1 00:23:26.277 --rc geninfo_unexecuted_blocks=1 00:23:26.277 00:23:26.277 ' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.277 --rc genhtml_branch_coverage=1 00:23:26.277 --rc genhtml_function_coverage=1 00:23:26.277 --rc genhtml_legend=1 00:23:26.277 --rc geninfo_all_blocks=1 00:23:26.277 --rc geninfo_unexecuted_blocks=1 00:23:26.277 00:23:26.277 ' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.277 --rc genhtml_branch_coverage=1 00:23:26.277 --rc genhtml_function_coverage=1 00:23:26.277 --rc genhtml_legend=1 00:23:26.277 --rc geninfo_all_blocks=1 00:23:26.277 --rc geninfo_unexecuted_blocks=1 00:23:26.277 00:23:26.277 ' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.277 --rc genhtml_branch_coverage=1 00:23:26.277 --rc genhtml_function_coverage=1 00:23:26.277 --rc genhtml_legend=1 00:23:26.277 --rc geninfo_all_blocks=1 00:23:26.277 --rc geninfo_unexecuted_blocks=1 00:23:26.277 00:23:26.277 ' 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.277 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.278 12:08:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
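
Annotation: the trace above is nvmf/common.sh building its table of supported vendor/device IDs and matching the host's NICs against it (the two E810 ports, 0x8086:0x159b, are found); the loop continuing below resolves each matched PCI function to its kernel net device through sysfs. A minimal standalone sketch of that sysfs walk, not the SPDK script itself (the E810 IDs and output wording are taken from this run; everything else is illustrative):

#!/usr/bin/env bash
# Hedged sketch of the PCI-to-netdev discovery traced here; assumes Intel E810
# (vendor 0x8086, device 0x159b) as seen in this run -- adjust IDs as needed.
intel=0x8086 e810=0x159b
pci_devs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == "$e810" ]] &&
        pci_devs+=("${dev##*/}")
done
for pci in "${pci_devs[@]}"; do
    # A port bound to a kernel driver exposes its interface under .../net/
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done
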
00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.421 12:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:23:34.421 00:23:34.421 --- 10.0.0.2 ping statistics --- 00:23:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.421 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:23:34.421 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:34.422 00:23:34.422 --- 10.0.0.1 ping statistics --- 00:23:34.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.422 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1065840 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1065840 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1065840 ']' 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.422 12:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 [2024-10-21 12:08:10.260349] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
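
Annotation: summarizing the nvmftestinit plumbing traced above -- one E810 port (cvl_0_0, the target side) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, its peer port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and two pings verify the path before nvmf_tgt is launched inside the namespace. A condensed, hedged replay of those steps (interface names and addresses from this run; run as root; the relative nvmf_tgt path is illustrative):

# Condensed replay of the namespace setup above; a sketch, not the test script.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
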
00:23:34.422 [2024-10-21 12:08:10.260416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.422 [2024-10-21 12:08:10.352483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.422 [2024-10-21 12:08:10.407547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.422 [2024-10-21 12:08:10.407605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.422 [2024-10-21 12:08:10.407614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.422 [2024-10-21 12:08:10.407622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.422 [2024-10-21 12:08:10.407629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.422 [2024-10-21 12:08:10.410001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.422 [2024-10-21 12:08:10.410161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.422 [2024-10-21 12:08:10.410318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.422 [2024-10-21 12:08:10.410318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.684 [2024-10-21 12:08:11.094274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.684 Malloc0 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:34.684 [2024-10-21 12:08:11.217556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:34.684 [
00:23:34.684   {
00:23:34.684     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:34.684     "subtype": "Discovery",
00:23:34.684     "listen_addresses": [
00:23:34.684       {
00:23:34.684         "trtype": "TCP",
00:23:34.684         "adrfam": "IPv4",
00:23:34.684         "traddr": "10.0.0.2",
00:23:34.684         "trsvcid": "4420"
00:23:34.684       }
00:23:34.684     ],
00:23:34.684     "allow_any_host": true,
00:23:34.684     "hosts": []
00:23:34.684   },
00:23:34.684   {
00:23:34.684     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:34.684     "subtype": "NVMe",
00:23:34.684     "listen_addresses": [
00:23:34.684       {
00:23:34.684         "trtype": "TCP",
00:23:34.684         "adrfam": "IPv4",
00:23:34.684         "traddr": "10.0.0.2",
00:23:34.684         "trsvcid": "4420"
00:23:34.684       }
00:23:34.684     ],
00:23:34.684     "allow_any_host": true,
00:23:34.684     "hosts": [],
00:23:34.684     "serial_number": "SPDK00000000000001",
00:23:34.684     "model_number": "SPDK bdev Controller",
00:23:34.684     "max_namespaces": 32,
00:23:34.684     "min_cntlid": 1,
00:23:34.684     "max_cntlid": 65519,
00:23:34.684     "namespaces": [
00:23:34.684       {
00:23:34.684         "nsid": 1,
00:23:34.684         "bdev_name": "Malloc0",
00:23:34.684         "name": "Malloc0",
00:23:34.684         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:23:34.684         "eui64": "ABCDEF0123456789",
00:23:34.684         "uuid": "12798ad3-e272-4d5f-9dea-9bff7be1771f"
00:23:34.684       }
00:23:34.684     ]
00:23:34.684   }
00:23:34.684 ]
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.684 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:34.948 [2024-10-21 12:08:11.282083] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:23:34.948 [2024-10-21 12:08:11.282153] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066041 ] 00:23:34.948 [2024-10-21 12:08:11.321497] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:34.948 [2024-10-21 12:08:11.321561] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:34.948 [2024-10-21 12:08:11.321566] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:34.948 [2024-10-21 12:08:11.321585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:34.948 [2024-10-21 12:08:11.321597] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:34.948 [2024-10-21 12:08:11.322559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:34.948 [2024-10-21 12:08:11.322605] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13f01e0 0 00:23:34.948 [2024-10-21 12:08:11.336336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:34.948 [2024-10-21 12:08:11.336353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:34.948 [2024-10-21 12:08:11.336359] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:34.948 [2024-10-21 12:08:11.336362] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:34.948 [2024-10-21 12:08:11.336406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.336413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.336418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.948 [2024-10-21 12:08:11.336440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:34.948 [2024-10-21 12:08:11.336466] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.948 [2024-10-21 12:08:11.344337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.948 [2024-10-21 12:08:11.344348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.948 [2024-10-21 12:08:11.344352] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.948 [2024-10-21 12:08:11.344371] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:34.948 [2024-10-21 12:08:11.344379] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:34.948 [2024-10-21 12:08:11.344385] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:34.948 [2024-10-21 12:08:11.344400] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.948 [2024-10-21 12:08:11.344417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.948 [2024-10-21 12:08:11.344434] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.948 [2024-10-21 12:08:11.344650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.948 [2024-10-21 12:08:11.344657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.948 [2024-10-21 12:08:11.344660] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344664] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.948 [2024-10-21 12:08:11.344670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:34.948 [2024-10-21 12:08:11.344677] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:34.948 [2024-10-21 12:08:11.344684] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.948 [2024-10-21 12:08:11.344698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.948 [2024-10-21 12:08:11.344709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.948 [2024-10-21 12:08:11.344922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.948 [2024-10-21 12:08:11.344929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.948 [2024-10-21 12:08:11.344932] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.948 [2024-10-21 12:08:11.344942] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:34.948 [2024-10-21 12:08:11.344951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:34.948 [2024-10-21 12:08:11.344957] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.948 [2024-10-21 12:08:11.344965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.948 [2024-10-21 12:08:11.344976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.948 [2024-10-21 12:08:11.344987] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.948 
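
Annotation: the DEBUG trace from here through the identify output below is the standard NVMe-oF admin-queue bring-up that spdk_nvme_identify drives: ICReq/ICResp on the fresh TCP connection, a FABRIC CONNECT that returns CNTLID 0x0001, PROPERTY GETs of VS and CAP, and, because CC.EN and CSTS.RDY both read 0, a PROPERTY SET of CC.EN=1 followed by polling until CSTS.RDY=1. A kernel-initiator equivalent of the same sequence, offered only as a hedged cross-check against the addresses used in this run (the /dev/nvme0 node name is an assumption, not part of this test):

nvme discover -t tcp -a 10.0.0.2 -s 4420    # queries the discovery subsystem, as spdk_nvme_identify does here
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                     # device node assumed; use whichever node the connect created
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
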
[2024-10-21 12:08:11.345189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.345195] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.345199] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.345208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:34.949 [2024-10-21 12:08:11.345218] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345222] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345226] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.345233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.949 [2024-10-21 12:08:11.345243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.345437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.345444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.345447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.345456] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:34.949 [2024-10-21 12:08:11.345461] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:34.949 [2024-10-21 12:08:11.345469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:34.949 [2024-10-21 12:08:11.345575] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:34.949 [2024-10-21 12:08:11.345580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:34.949 [2024-10-21 12:08:11.345588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.345602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.949 [2024-10-21 12:08:11.345613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.345826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.345833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:34.949 [2024-10-21 12:08:11.345836] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345840] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.345845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:34.949 [2024-10-21 12:08:11.345854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345858] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.345865] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.345872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.949 [2024-10-21 12:08:11.345882] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.346065] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.346071] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.346075] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.346079] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.346083] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:34.949 [2024-10-21 12:08:11.346088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.346096] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:34.949 [2024-10-21 12:08:11.346104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.346114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.346118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.346125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.949 [2024-10-21 12:08:11.346136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.346388] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.949 [2024-10-21 12:08:11.346395] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.949 [2024-10-21 12:08:11.346399] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.346404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13f01e0): datao=0, datal=4096, cccid=0 00:23:34.949 [2024-10-21 12:08:11.346408] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1459180) on tqpair(0x13f01e0): expected_datao=0, 
payload_size=4096 00:23:34.949 [2024-10-21 12:08:11.346413] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.346459] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.346464] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.387502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.387505] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.387520] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:34.949 [2024-10-21 12:08:11.387525] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:34.949 [2024-10-21 12:08:11.387529] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:34.949 [2024-10-21 12:08:11.387535] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:34.949 [2024-10-21 12:08:11.387540] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:34.949 [2024-10-21 12:08:11.387549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.387558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.387566] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387570] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387574] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.949 [2024-10-21 12:08:11.387594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.387814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.387820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.387824] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387828] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0 00:23:34.949 [2024-10-21 12:08:11.387835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.949 [2024-10-21 12:08:11.387856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387863] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.949 [2024-10-21 12:08:11.387875] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387879] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.949 [2024-10-21 12:08:11.387894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.949 [2024-10-21 12:08:11.387912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.387925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:34.949 [2024-10-21 12:08:11.387931] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.387935] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13f01e0) 00:23:34.949 [2024-10-21 12:08:11.387942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.949 [2024-10-21 12:08:11.387954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459180, cid 0, qid 0 00:23:34.949 [2024-10-21 12:08:11.387962] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459300, cid 1, qid 0 00:23:34.949 [2024-10-21 12:08:11.387967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459480, cid 2, qid 0 00:23:34.949 [2024-10-21 12:08:11.387972] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.949 [2024-10-21 12:08:11.387977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459780, cid 4, qid 0 00:23:34.949 [2024-10-21 12:08:11.388225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.949 [2024-10-21 12:08:11.388232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.949 [2024-10-21 12:08:11.388235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.949 [2024-10-21 12:08:11.388239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1459780) on tqpair=0x13f01e0 00:23:34.950 [2024-10-21 12:08:11.388244] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:34.950 [2024-10-21 12:08:11.388249] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:34.950 [2024-10-21 12:08:11.388261] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.388264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13f01e0) 00:23:34.950 [2024-10-21 12:08:11.388271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.950 [2024-10-21 12:08:11.388282] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459780, cid 4, qid 0 00:23:34.950 [2024-10-21 12:08:11.392348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.950 [2024-10-21 12:08:11.392357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.950 [2024-10-21 12:08:11.392360] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392364] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13f01e0): datao=0, datal=4096, cccid=4 00:23:34.950 [2024-10-21 12:08:11.392369] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1459780) on tqpair(0x13f01e0): expected_datao=0, payload_size=4096 00:23:34.950 [2024-10-21 12:08:11.392374] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392381] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392384] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.950 [2024-10-21 12:08:11.392396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.950 [2024-10-21 12:08:11.392400] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459780) on tqpair=0x13f01e0 00:23:34.950 [2024-10-21 12:08:11.392417] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:34.950 [2024-10-21 12:08:11.392451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392455] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13f01e0) 00:23:34.950 [2024-10-21 12:08:11.392462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.950 [2024-10-21 12:08:11.392470] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392474] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392477] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13f01e0) 00:23:34.950 [2024-10-21 12:08:11.392483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.950 [2024-10-21 
12:08:11.392501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459780, cid 4, qid 0 00:23:34.950 [2024-10-21 12:08:11.392507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459900, cid 5, qid 0 00:23:34.950 [2024-10-21 12:08:11.392763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.950 [2024-10-21 12:08:11.392769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.950 [2024-10-21 12:08:11.392772] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392776] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13f01e0): datao=0, datal=1024, cccid=4 00:23:34.950 [2024-10-21 12:08:11.392781] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1459780) on tqpair(0x13f01e0): expected_datao=0, payload_size=1024 00:23:34.950 [2024-10-21 12:08:11.392785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392792] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392795] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392801] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.950 [2024-10-21 12:08:11.392807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.950 [2024-10-21 12:08:11.392810] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.392814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459900) on tqpair=0x13f01e0 00:23:34.950 [2024-10-21 12:08:11.437361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.950 [2024-10-21 12:08:11.437377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.950 [2024-10-21 12:08:11.437380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.437384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459780) on tqpair=0x13f01e0 00:23:34.950 [2024-10-21 12:08:11.437399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.437403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13f01e0) 00:23:34.950 [2024-10-21 12:08:11.437412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.950 [2024-10-21 12:08:11.437432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459780, cid 4, qid 0 00:23:34.950 [2024-10-21 12:08:11.437732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.950 [2024-10-21 12:08:11.437739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.950 [2024-10-21 12:08:11.437742] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.437747] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13f01e0): datao=0, datal=3072, cccid=4 00:23:34.950 [2024-10-21 12:08:11.437751] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1459780) on tqpair(0x13f01e0): expected_datao=0, payload_size=3072 00:23:34.950 [2024-10-21 12:08:11.437756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.950 [2024-10-21 12:08:11.437772] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.437777] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479334] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.950 [2024-10-21 12:08:11.479343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.950 [2024-10-21 12:08:11.479347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459780) on tqpair=0x13f01e0
00:23:34.950 [2024-10-21 12:08:11.479361] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13f01e0)
00:23:34.950 [2024-10-21 12:08:11.479372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.950 [2024-10-21 12:08:11.479394] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459780, cid 4, qid 0
00:23:34.950 [2024-10-21 12:08:11.479529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:34.950 [2024-10-21 12:08:11.479535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:34.950 [2024-10-21 12:08:11.479539] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13f01e0): datao=0, datal=8, cccid=4
00:23:34.950 [2024-10-21 12:08:11.479547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1459780) on tqpair(0x13f01e0): expected_datao=0, payload_size=8
00:23:34.950 [2024-10-21 12:08:11.479552] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479558] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.479562] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.520515] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.950 [2024-10-21 12:08:11.520527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.950 [2024-10-21 12:08:11.520531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.950 [2024-10-21 12:08:11.520535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459780) on tqpair=0x13f01e0
00:23:34.950 =====================================================
00:23:34.950 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:34.950 =====================================================
00:23:34.950 Controller Capabilities/Features
00:23:34.950 ================================
00:23:34.950 Vendor ID: 0000
00:23:34.950 Subsystem Vendor ID: 0000
00:23:34.950 Serial Number: ....................
00:23:34.950 Model Number: ........................................
00:23:34.950 Firmware Version: 25.01
00:23:34.950 Recommended Arb Burst: 0
00:23:34.950 IEEE OUI Identifier: 00 00 00
00:23:34.950 Multi-path I/O
00:23:34.950 May have multiple subsystem ports: No
00:23:34.950 May have multiple controllers: No
00:23:34.950 Associated with SR-IOV VF: No
00:23:34.950 Max Data Transfer Size: 131072
00:23:34.950 Max Number of Namespaces: 0
00:23:34.950 Max Number of I/O Queues: 1024
00:23:34.950 NVMe Specification Version (VS): 1.3
00:23:34.950 NVMe Specification Version (Identify): 1.3
00:23:34.950 Maximum Queue Entries: 128
00:23:34.950 Contiguous Queues Required: Yes
00:23:34.950 Arbitration Mechanisms Supported
00:23:34.950 Weighted Round Robin: Not Supported
00:23:34.950 Vendor Specific: Not Supported
00:23:34.950 Reset Timeout: 15000 ms
00:23:34.950 Doorbell Stride: 4 bytes
00:23:34.950 NVM Subsystem Reset: Not Supported
00:23:34.950 Command Sets Supported
00:23:34.950 NVM Command Set: Supported
00:23:34.950 Boot Partition: Not Supported
00:23:34.950 Memory Page Size Minimum: 4096 bytes
00:23:34.950 Memory Page Size Maximum: 4096 bytes
00:23:34.950 Persistent Memory Region: Not Supported
00:23:34.950 Optional Asynchronous Events Supported
00:23:34.950 Namespace Attribute Notices: Not Supported
00:23:34.950 Firmware Activation Notices: Not Supported
00:23:34.950 ANA Change Notices: Not Supported
00:23:34.950 PLE Aggregate Log Change Notices: Not Supported
00:23:34.950 LBA Status Info Alert Notices: Not Supported
00:23:34.950 EGE Aggregate Log Change Notices: Not Supported
00:23:34.950 Normal NVM Subsystem Shutdown event: Not Supported
00:23:34.950 Zone Descriptor Change Notices: Not Supported
00:23:34.950 Discovery Log Change Notices: Supported
00:23:34.950 Controller Attributes
00:23:34.950 128-bit Host Identifier: Not Supported
00:23:34.950 Non-Operational Permissive Mode: Not Supported
00:23:34.950 NVM Sets: Not Supported
00:23:34.950 Read Recovery Levels: Not Supported
00:23:34.950 Endurance Groups: Not Supported
00:23:34.950 Predictable Latency Mode: Not Supported
00:23:34.950 Traffic Based Keep Alive: Not Supported
00:23:34.950 Namespace Granularity: Not Supported
00:23:34.950 SQ Associations: Not Supported
00:23:34.950 UUID List: Not Supported
00:23:34.950 Multi-Domain Subsystem: Not Supported
00:23:34.950 Fixed Capacity Management: Not Supported
00:23:34.950 Variable Capacity Management: Not Supported
00:23:34.950 Delete Endurance Group: Not Supported
00:23:34.951 Delete NVM Set: Not Supported
00:23:34.951 Extended LBA Formats Supported: Not Supported
00:23:34.951 Flexible Data Placement Supported: Not Supported
00:23:34.951
00:23:34.951 Controller Memory Buffer Support
00:23:34.951 ================================
00:23:34.951 Supported: No
00:23:34.951
00:23:34.951 Persistent Memory Region Support
00:23:34.951 ================================
00:23:34.951 Supported: No
00:23:34.951
00:23:34.951 Admin Command Set Attributes
00:23:34.951 ============================
00:23:34.951 Security Send/Receive: Not Supported
00:23:34.951 Format NVM: Not Supported
00:23:34.951 Firmware Activate/Download: Not Supported
00:23:34.951 Namespace Management: Not Supported
00:23:34.951 Device Self-Test: Not Supported
00:23:34.951 Directives: Not Supported
00:23:34.951 NVMe-MI: Not Supported
00:23:34.951 Virtualization Management: Not Supported
00:23:34.951 Doorbell Buffer Config: Not Supported
00:23:34.951 Get LBA Status Capability: Not Supported
00:23:34.951 Command & Feature Lockdown Capability: Not Supported
00:23:34.951 Abort Command Limit: 1
00:23:34.951 Async Event Request Limit: 4
00:23:34.951 Number of Firmware Slots: N/A
00:23:34.951 Firmware Slot 1 Read-Only: N/A
00:23:34.951 Firmware Activation Without Reset: N/A
00:23:34.951 Multiple Update Detection Support: N/A
00:23:34.951 Firmware Update Granularity: No Information Provided
00:23:34.951 Per-Namespace SMART Log: No
00:23:34.951 Asymmetric Namespace Access Log Page: Not Supported
00:23:34.951 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:34.951 Command Effects Log Page: Not Supported
00:23:34.951 Get Log Page Extended Data: Supported
00:23:34.951 Telemetry Log Pages: Not Supported
00:23:34.951 Persistent Event Log Pages: Not Supported
00:23:34.951 Supported Log Pages Log Page: May Support
00:23:34.951 Commands Supported & Effects Log Page: Not Supported
00:23:34.951 Feature Identifiers & Effects Log Page: May Support
00:23:34.951 NVMe-MI Commands & Effects Log Page: May Support
00:23:34.951 Data Area 4 for Telemetry Log: Not Supported
00:23:34.951 Error Log Page Entries Supported: 128
00:23:34.951 Keep Alive: Not Supported
00:23:34.951
00:23:34.951 NVM Command Set Attributes
00:23:34.951 ==========================
00:23:34.951 Submission Queue Entry Size
00:23:34.951 Max: 1
00:23:34.951 Min: 1
00:23:34.951 Completion Queue Entry Size
00:23:34.951 Max: 1
00:23:34.951 Min: 1
00:23:34.951 Number of Namespaces: 0
00:23:34.951 Compare Command: Not Supported
00:23:34.951 Write Uncorrectable Command: Not Supported
00:23:34.951 Dataset Management Command: Not Supported
00:23:34.951 Write Zeroes Command: Not Supported
00:23:34.951 Set Features Save Field: Not Supported
00:23:34.951 Reservations: Not Supported
00:23:34.951 Timestamp: Not Supported
00:23:34.951 Copy: Not Supported
00:23:34.951 Volatile Write Cache: Not Present
00:23:34.951 Atomic Write Unit (Normal): 1
00:23:34.951 Atomic Write Unit (PFail): 1
00:23:34.951 Atomic Compare & Write Unit: 1
00:23:34.951 Fused Compare & Write: Supported
00:23:34.951 Scatter-Gather List
00:23:34.951 SGL Command Set: Supported
00:23:34.951 SGL Keyed: Supported
00:23:34.951 SGL Bit Bucket Descriptor: Not Supported
00:23:34.951 SGL Metadata Pointer: Not Supported
00:23:34.951 Oversized SGL: Not Supported
00:23:34.951 SGL Metadata Address: Not Supported
00:23:34.951 SGL Offset: Supported
00:23:34.951 Transport SGL Data Block: Not Supported
00:23:34.951 Replay Protected Memory Block: Not Supported
00:23:34.951
00:23:34.951 Firmware Slot Information
00:23:34.951 =========================
00:23:34.951 Active slot: 0
00:23:34.951
00:23:34.951
00:23:34.951 Error Log
00:23:34.951 =========
00:23:34.951
00:23:34.951 Active Namespaces
00:23:34.951 =================
00:23:34.951 Discovery Log Page
00:23:34.951 ==================
00:23:34.951 Generation Counter: 2
00:23:34.951 Number of Records: 2
00:23:34.951 Record Format: 0
00:23:34.951
00:23:34.951 Discovery Log Entry 0
00:23:34.951 ----------------------
00:23:34.951 Transport Type: 3 (TCP)
00:23:34.951 Address Family: 1 (IPv4)
00:23:34.951 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:34.951 Entry Flags:
00:23:34.951 Duplicate Returned Information: 1
00:23:34.951 Explicit Persistent Connection Support for Discovery: 1
00:23:34.951 Transport Requirements:
00:23:34.951 Secure Channel: Not Required
00:23:34.951 Port ID: 0 (0x0000)
00:23:34.951 Controller ID: 65535 (0xffff)
00:23:34.951 Admin Max SQ Size: 128
00:23:34.951 Transport Service Identifier: 4420
00:23:34.951 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:34.951 Transport Address: 10.0.0.2
00:23:34.951 Discovery Log Entry 1
00:23:34.951 ----------------------
00:23:34.951 Transport Type: 3 (TCP)
00:23:34.951 Address Family: 1 (IPv4)
00:23:34.951 Subsystem Type: 2 (NVM Subsystem)
00:23:34.951 Entry Flags:
00:23:34.951 Duplicate Returned Information: 0
00:23:34.951 Explicit Persistent Connection Support for Discovery: 0
00:23:34.951 Transport Requirements:
00:23:34.951 Secure Channel: Not Required
00:23:34.951 Port ID: 0 (0x0000)
00:23:34.951 Controller ID: 65535 (0xffff)
00:23:34.951 Admin Max SQ Size: 128
00:23:34.951 Transport Service Identifier: 4420
00:23:34.951 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:34.951 Transport Address: 10.0.0.2 [2024-10-21 12:08:11.520640] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:34.951 [2024-10-21 12:08:11.520652] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459180) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.520659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.951 [2024-10-21 12:08:11.520665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459300) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.520670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.951 [2024-10-21 12:08:11.520675] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459480) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.520680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.951 [2024-10-21 12:08:11.520685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.520689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.951 [2024-10-21 12:08:11.520699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.520703] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.520707] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0)
00:23:34.951 [2024-10-21 12:08:11.520715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.951 [2024-10-21 12:08:11.520732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0
00:23:34.951 [2024-10-21 12:08:11.520833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.951 [2024-10-21 12:08:11.520840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.951 [2024-10-21 12:08:11.520843] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.520847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.520854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.520858] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.520862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0)
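
The discovery log just printed is the machine-readable map of this target: entry 0 describes the discovery subsystem itself, and entry 1 points at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. The GET LOG PAGE commands for page 0x70 in the trace are how spdk_nvme_identify reads it. A minimal sketch of the same read through SPDK's public host API follows; it is not part of the test output, it assumes an SPDK build, and the program name and fixed 4 KiB buffer are illustrative choices with error handling trimmed.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_log_done;

static void
discovery_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	/* A real caller would check spdk_nvme_cpl_is_error(cpl) here. */
	g_log_done = true;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page *log;
	uint64_t i, max_entries;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* The same discovery service the trace above talks to. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Single fixed-size fetch: 1 KiB header plus room for 3 records. */
	log = spdk_zmalloc(4096, 0, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (log == NULL ||
	    spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     log, 4096, 0,
					     discovery_log_done, NULL) != 0) {
		return 1;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	max_entries = (4096 - sizeof(*log)) / sizeof(log->entries[0]);
	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);
	for (i = 0; i < log->numrec && i < max_entries; i++) {
		printf("  entry %" PRIu64 ": subnqn=%.223s trsvcid=%.32s\n",
		       i, log->entries[i].subnqn, log->entries[i].trsvcid);
	}

	spdk_free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}

The trace shows the more careful variant of this read: cdw10=00ff0070 fetches the 1 KiB header (the datal=1024 C2H PDU), cdw10=02ff0070 then fetches the full 3 KiB page (datal=3072), and cdw10=00010070 re-reads the 8-byte generation counter (datal=8) to detect a log that changed mid-read.
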
00:23:34.951 [2024-10-21 12:08:11.520869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.951 [2024-10-21 12:08:11.520886] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0
00:23:34.951 [2024-10-21 12:08:11.521100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.951 [2024-10-21 12:08:11.521107] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.951 [2024-10-21 12:08:11.521110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521114] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.521119] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:23:34.951 [2024-10-21 12:08:11.521126] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:23:34.951 [2024-10-21 12:08:11.521136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521140] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521144] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0)
00:23:34.951 [2024-10-21 12:08:11.521151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.951 [2024-10-21 12:08:11.521161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0
00:23:34.951 [2024-10-21 12:08:11.521366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.951 [2024-10-21 12:08:11.521373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.951 [2024-10-21 12:08:11.521377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0
00:23:34.951 [2024-10-21 12:08:11.521391] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521395] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.951 [2024-10-21 12:08:11.521398] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0)
00:23:34.951 [2024-10-21 12:08:11.521405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.952 [2024-10-21 12:08:11.521416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0
00:23:34.952 [2024-10-21 12:08:11.521647] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:34.952 [2024-10-21 12:08:11.521653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:34.952 [2024-10-21 12:08:11.521656] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:34.952 [2024-10-21 12:08:11.521660] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0
00:23:34.952 [2024-10-21 12:08:11.521670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:34.952 [2024-10-21 12:08:11.521674] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:34.952 [2024-10-21 12:08:11.521678]
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.521684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.521694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.521871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.521877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.521881] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.521884] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.521894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.521898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.521907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.521914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.521925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.522111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.522117] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.522120] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.522134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.522148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.522158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.522388] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.522394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.522398] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522402] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.522411] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522415] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522419] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.522426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.522436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.522685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.522691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.522695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.522709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.522723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.522733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.522906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.522912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.522915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.522929] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.522937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.522947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.522957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 [2024-10-21 12:08:11.523187] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.523194] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.523197] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.523201] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.523211] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.523215] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.523218] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13f01e0) 00:23:34.952 [2024-10-21 12:08:11.523225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.952 [2024-10-21 12:08:11.523235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1459600, cid 3, qid 0 00:23:34.952 
[2024-10-21 12:08:11.527332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.952 [2024-10-21 12:08:11.527340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.952 [2024-10-21 12:08:11.527344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.952 [2024-10-21 12:08:11.527348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1459600) on tqpair=0x13f01e0 00:23:34.952 [2024-10-21 12:08:11.527356] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:34.952 00:23:35.217 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:35.217 [2024-10-21 12:08:11.574550] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:23:35.217 [2024-10-21 12:08:11.574596] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066045 ] 00:23:35.217 [2024-10-21 12:08:11.608301] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:35.217 [2024-10-21 12:08:11.612374] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:35.217 [2024-10-21 12:08:11.612382] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:35.217 [2024-10-21 12:08:11.612399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:35.217 [2024-10-21 12:08:11.612409] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:35.217 [2024-10-21 12:08:11.613063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:35.217 [2024-10-21 12:08:11.613105] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8541e0 0 00:23:35.217 [2024-10-21 12:08:11.619340] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:35.217 [2024-10-21 12:08:11.619357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:35.217 [2024-10-21 12:08:11.619362] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:35.217 [2024-10-21 12:08:11.619366] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:35.217 [2024-10-21 12:08:11.619407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.619419] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.619423] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.217 [2024-10-21 12:08:11.619437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:35.217 [2024-10-21 12:08:11.619463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.217 [2024-10-21 12:08:11.626333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.217 [2024-10-21 12:08:11.626344] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.217 [2024-10-21 12:08:11.626347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.217 [2024-10-21 12:08:11.626365] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:35.217 [2024-10-21 12:08:11.626372] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:35.217 [2024-10-21 12:08:11.626378] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:35.217 [2024-10-21 12:08:11.626392] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.217 [2024-10-21 12:08:11.626409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-10-21 12:08:11.626425] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.217 [2024-10-21 12:08:11.626604] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.217 [2024-10-21 12:08:11.626611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.217 [2024-10-21 12:08:11.626615] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.217 [2024-10-21 12:08:11.626624] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:35.217 [2024-10-21 12:08:11.626632] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:35.217 [2024-10-21 12:08:11.626639] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.217 [2024-10-21 12:08:11.626653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-10-21 12:08:11.626664] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.217 [2024-10-21 12:08:11.626837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.217 [2024-10-21 12:08:11.626845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.217 [2024-10-21 12:08:11.626849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.217 [2024-10-21 12:08:11.626858] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:35.217 [2024-10-21 12:08:11.626866] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:35.217 [2024-10-21 12:08:11.626873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.626884] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.217 [2024-10-21 12:08:11.626891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-10-21 12:08:11.626901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.217 [2024-10-21 12:08:11.627083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.217 [2024-10-21 12:08:11.627089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.217 [2024-10-21 12:08:11.627092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.627096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.217 [2024-10-21 12:08:11.627101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:35.217 [2024-10-21 12:08:11.627113] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.627117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.217 [2024-10-21 12:08:11.627121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.217 [2024-10-21 12:08:11.627127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-10-21 12:08:11.627138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.627387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.627393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.627397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.627401] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.627405] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:35.218 [2024-10-21 12:08:11.627410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:35.218 [2024-10-21 12:08:11.627418] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:35.218 [2024-10-21 12:08:11.627524] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:35.218 [2024-10-21 12:08:11.627528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:35.218 [2024-10-21 12:08:11.627536] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 
[2024-10-21 12:08:11.627540] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.627544] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.627550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-10-21 12:08:11.627562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.627757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.627763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.627767] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.627771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.627775] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:35.218 [2024-10-21 12:08:11.627785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.627791] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.627795] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.627802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-10-21 12:08:11.627814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.628002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.628008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.628012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.628016] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.628020] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:35.218 [2024-10-21 12:08:11.628025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.628033] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:35.218 [2024-10-21 12:08:11.628047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.628056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.628060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.628067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-10-21 12:08:11.628077] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.628346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.218 [2024-10-21 12:08:11.628354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.218 [2024-10-21 12:08:11.628357] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.628361] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=4096, cccid=0 00:23:35.218 [2024-10-21 12:08:11.628366] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd180) on tqpair(0x8541e0): expected_datao=0, payload_size=4096 00:23:35.218 [2024-10-21 12:08:11.628371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.628379] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.628384] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.669494] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.669509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.669513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.669517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.669527] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:35.218 [2024-10-21 12:08:11.669532] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:35.218 [2024-10-21 12:08:11.669536] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:35.218 [2024-10-21 12:08:11.669541] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:35.218 [2024-10-21 12:08:11.669546] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:35.218 [2024-10-21 12:08:11.669551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.669565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.669573] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.669577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.669581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.669590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:35.218 [2024-10-21 12:08:11.669602] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.673332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.673343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.673347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
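
Everything from "setting state to connect adminq" through "setting state to ready" in this second trace is a single call on the host side: spdk_nvme_connect() drives the ICREQ/ICRESP exchange, the FABRIC CONNECT command, the VS/CAP property reads, CC.EN=1, the CSTS.RDY poll, IDENTIFY, and the AER and keep-alive setup that the debug lines above walk through. A minimal sketch, assuming an SPDK build (program name and printed fields are illustrative), using the same -r transport string that was passed to spdk_nvme_identify:

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* The same -r string handed to spdk_nvme_identify in the log. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* One call runs the whole admin-queue state machine traced above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid=0x%04x mdts=%u fused_compare_and_write=%u sn=%.20s\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts,
	       (unsigned)cdata->fuses.compare_and_write, cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The values it prints correspond to the nvme_ctrlr_identify_done lines above (CNTLID 0x0001, MDTS max_xfer_size 131072, fuses compare and write: 1).
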
00:23:35.218 [2024-10-21 12:08:11.673351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.673358] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.218 [2024-10-21 12:08:11.673379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673383] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.218 [2024-10-21 12:08:11.673399] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673402] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.218 [2024-10-21 12:08:11.673418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.218 [2024-10-21 12:08:11.673436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.673449] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.673457] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673460] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-10-21 12:08:11.673481] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd180, cid 0, qid 0 00:23:35.218 [2024-10-21 12:08:11.673486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd300, cid 1, qid 0 00:23:35.218 [2024-10-21 12:08:11.673495] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd480, cid 2, qid 0 00:23:35.218 [2024-10-21 12:08:11.673500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, 
cid 3, qid 0 00:23:35.218 [2024-10-21 12:08:11.673505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.218 [2024-10-21 12:08:11.673715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.218 [2024-10-21 12:08:11.673722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.218 [2024-10-21 12:08:11.673726] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.218 [2024-10-21 12:08:11.673734] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:35.218 [2024-10-21 12:08:11.673740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.673751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.673759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:35.218 [2024-10-21 12:08:11.673766] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673770] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.218 [2024-10-21 12:08:11.673774] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.218 [2024-10-21 12:08:11.673780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:35.218 [2024-10-21 12:08:11.673791] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.218 [2024-10-21 12:08:11.674005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.674012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.674016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.674088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.674099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.674106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674110] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.674117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.674127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.219 [2024-10-21 12:08:11.674372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.219 [2024-10-21 12:08:11.674379] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.219 [2024-10-21 12:08:11.674382] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674387] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=4096, cccid=4 00:23:35.219 [2024-10-21 12:08:11.674392] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd780) on tqpair(0x8541e0): expected_datao=0, payload_size=4096 00:23:35.219 [2024-10-21 12:08:11.674397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674404] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674414] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674578] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.674584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.674588] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674592] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.674602] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:35.219 [2024-10-21 12:08:11.674611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.674621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.674628] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.674638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.674649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.219 [2024-10-21 12:08:11.674882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.219 [2024-10-21 12:08:11.674891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.219 [2024-10-21 12:08:11.674894] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674898] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=4096, cccid=4 00:23:35.219 [2024-10-21 12:08:11.674902] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd780) on tqpair(0x8541e0): expected_datao=0, payload_size=4096 00:23:35.219 [2024-10-21 12:08:11.674907] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674913] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.674917] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.675094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 
12:08:11.675097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.675114] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.675141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.675152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.219 [2024-10-21 12:08:11.675403] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.219 [2024-10-21 12:08:11.675414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.219 [2024-10-21 12:08:11.675417] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675421] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=4096, cccid=4 00:23:35.219 [2024-10-21 12:08:11.675428] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd780) on tqpair(0x8541e0): expected_datao=0, payload_size=4096 00:23:35.219 [2024-10-21 12:08:11.675433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675440] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675443] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.675605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.675609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.675623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:35.219 [2024-10-21 
12:08:11.675659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675667] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:35.219 [2024-10-21 12:08:11.675672] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:35.219 [2024-10-21 12:08:11.675678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:35.219 [2024-10-21 12:08:11.675694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.675706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.675714] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.675721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.675727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.219 [2024-10-21 12:08:11.675739] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.219 [2024-10-21 12:08:11.675745] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd900, cid 5, qid 0 00:23:35.219 [2024-10-21 12:08:11.675989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.675996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.675999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.676010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.676016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.676019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd900) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.676035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.676046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.676056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd900, cid 5, qid 0 00:23:35.219 [2024-10-21 12:08:11.676248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.676256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:35.219 [2024-10-21 12:08:11.676259] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd900) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.676273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.676283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.676293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd900, cid 5, qid 0 00:23:35.219 [2024-10-21 12:08:11.676498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.676506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.219 [2024-10-21 12:08:11.676509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd900) on tqpair=0x8541e0 00:23:35.219 [2024-10-21 12:08:11.676523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.219 [2024-10-21 12:08:11.676526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8541e0) 00:23:35.219 [2024-10-21 12:08:11.676533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.219 [2024-10-21 12:08:11.676543] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd900, cid 5, qid 0 00:23:35.219 [2024-10-21 12:08:11.676738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.219 [2024-10-21 12:08:11.676744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.220 [2024-10-21 12:08:11.676748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.676751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd900) on tqpair=0x8541e0 00:23:35.220 [2024-10-21 12:08:11.676767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.676771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8541e0) 00:23:35.220 [2024-10-21 12:08:11.676778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.220 [2024-10-21 12:08:11.676786] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.676789] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8541e0) 00:23:35.220 [2024-10-21 12:08:11.676796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.220 [2024-10-21 12:08:11.676803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.676807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8541e0) 00:23:35.220 [2024-10-21 12:08:11.676813] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.220 [2024-10-21 12:08:11.676825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.676829] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8541e0) 00:23:35.220 [2024-10-21 12:08:11.676835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.220 [2024-10-21 12:08:11.676847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd900, cid 5, qid 0 00:23:35.220 [2024-10-21 12:08:11.676852] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd780, cid 4, qid 0 00:23:35.220 [2024-10-21 12:08:11.676857] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bda80, cid 6, qid 0 00:23:35.220 [2024-10-21 12:08:11.676862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdc00, cid 7, qid 0 00:23:35.220 [2024-10-21 12:08:11.677126] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.220 [2024-10-21 12:08:11.677132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.220 [2024-10-21 12:08:11.677136] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677139] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=8192, cccid=5 00:23:35.220 [2024-10-21 12:08:11.677144] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd900) on tqpair(0x8541e0): expected_datao=0, payload_size=8192 00:23:35.220 [2024-10-21 12:08:11.677148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677246] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677250] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677256] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.220 [2024-10-21 12:08:11.677262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.220 [2024-10-21 12:08:11.677266] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677269] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=512, cccid=4 00:23:35.220 [2024-10-21 12:08:11.677274] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd780) on tqpair(0x8541e0): expected_datao=0, payload_size=512 00:23:35.220 [2024-10-21 12:08:11.677278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677284] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677288] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.220 [2024-10-21 12:08:11.677299] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.220 [2024-10-21 12:08:11.677303] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.677306] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=512, cccid=6 
00:23:35.220 [2024-10-21 12:08:11.677311] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bda80) on tqpair(0x8541e0): expected_datao=0, payload_size=512 00:23:35.220 [2024-10-21 12:08:11.677315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681331] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681338] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681344] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:35.220 [2024-10-21 12:08:11.681350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:35.220 [2024-10-21 12:08:11.681353] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8541e0): datao=0, datal=4096, cccid=7 00:23:35.220 [2024-10-21 12:08:11.681361] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bdc00) on tqpair(0x8541e0): expected_datao=0, payload_size=4096 00:23:35.220 [2024-10-21 12:08:11.681368] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681381] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681385] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681392] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.220 [2024-10-21 12:08:11.681398] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.220 [2024-10-21 12:08:11.681402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd900) on tqpair=0x8541e0 00:23:35.220 [2024-10-21 12:08:11.681419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.220 [2024-10-21 12:08:11.681425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.220 [2024-10-21 12:08:11.681428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681432] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd780) on tqpair=0x8541e0 00:23:35.220 [2024-10-21 12:08:11.681443] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.220 [2024-10-21 12:08:11.681449] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.220 [2024-10-21 12:08:11.681452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bda80) on tqpair=0x8541e0 00:23:35.220 [2024-10-21 12:08:11.681463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.220 [2024-10-21 12:08:11.681469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.220 [2024-10-21 12:08:11.681472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.220 [2024-10-21 12:08:11.681476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bdc00) on tqpair=0x8541e0 00:23:35.220 ===================================================== 00:23:35.220 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.220 ===================================================== 00:23:35.220 Controller Capabilities/Features 
00:23:35.220 ================================ 00:23:35.220 Vendor ID: 8086 00:23:35.220 Subsystem Vendor ID: 8086 00:23:35.220 Serial Number: SPDK00000000000001 00:23:35.220 Model Number: SPDK bdev Controller 00:23:35.220 Firmware Version: 25.01 00:23:35.220 Recommended Arb Burst: 6 00:23:35.220 IEEE OUI Identifier: e4 d2 5c 00:23:35.220 Multi-path I/O 00:23:35.220 May have multiple subsystem ports: Yes 00:23:35.220 May have multiple controllers: Yes 00:23:35.220 Associated with SR-IOV VF: No 00:23:35.220 Max Data Transfer Size: 131072 00:23:35.220 Max Number of Namespaces: 32 00:23:35.220 Max Number of I/O Queues: 127 00:23:35.220 NVMe Specification Version (VS): 1.3 00:23:35.220 NVMe Specification Version (Identify): 1.3 00:23:35.220 Maximum Queue Entries: 128 00:23:35.220 Contiguous Queues Required: Yes 00:23:35.220 Arbitration Mechanisms Supported 00:23:35.220 Weighted Round Robin: Not Supported 00:23:35.220 Vendor Specific: Not Supported 00:23:35.220 Reset Timeout: 15000 ms 00:23:35.220 Doorbell Stride: 4 bytes 00:23:35.220 NVM Subsystem Reset: Not Supported 00:23:35.220 Command Sets Supported 00:23:35.220 NVM Command Set: Supported 00:23:35.220 Boot Partition: Not Supported 00:23:35.220 Memory Page Size Minimum: 4096 bytes 00:23:35.220 Memory Page Size Maximum: 4096 bytes 00:23:35.220 Persistent Memory Region: Not Supported 00:23:35.220 Optional Asynchronous Events Supported 00:23:35.220 Namespace Attribute Notices: Supported 00:23:35.220 Firmware Activation Notices: Not Supported 00:23:35.220 ANA Change Notices: Not Supported 00:23:35.220 PLE Aggregate Log Change Notices: Not Supported 00:23:35.220 LBA Status Info Alert Notices: Not Supported 00:23:35.220 EGE Aggregate Log Change Notices: Not Supported 00:23:35.220 Normal NVM Subsystem Shutdown event: Not Supported 00:23:35.220 Zone Descriptor Change Notices: Not Supported 00:23:35.220 Discovery Log Change Notices: Not Supported 00:23:35.220 Controller Attributes 00:23:35.220 128-bit Host Identifier: Supported 00:23:35.220 Non-Operational Permissive Mode: Not Supported 00:23:35.220 NVM Sets: Not Supported 00:23:35.220 Read Recovery Levels: Not Supported 00:23:35.220 Endurance Groups: Not Supported 00:23:35.220 Predictable Latency Mode: Not Supported 00:23:35.220 Traffic Based Keep Alive: Not Supported 00:23:35.220 Namespace Granularity: Not Supported 00:23:35.220 SQ Associations: Not Supported 00:23:35.220 UUID List: Not Supported 00:23:35.220 Multi-Domain Subsystem: Not Supported 00:23:35.220 Fixed Capacity Management: Not Supported 00:23:35.220 Variable Capacity Management: Not Supported 00:23:35.220 Delete Endurance Group: Not Supported 00:23:35.220 Delete NVM Set: Not Supported 00:23:35.220 Extended LBA Formats Supported: Not Supported 00:23:35.220 Flexible Data Placement Supported: Not Supported 00:23:35.220 00:23:35.220 Controller Memory Buffer Support 00:23:35.220 ================================ 00:23:35.220 Supported: No 00:23:35.220 00:23:35.220 Persistent Memory Region Support 00:23:35.220 ================================ 00:23:35.220 Supported: No 00:23:35.220 00:23:35.220 Admin Command Set Attributes 00:23:35.220 ============================ 00:23:35.220 Security Send/Receive: Not Supported 00:23:35.220 Format NVM: Not Supported 00:23:35.220 Firmware Activate/Download: Not Supported 00:23:35.220 Namespace Management: Not Supported 00:23:35.220 Device Self-Test: Not Supported 00:23:35.220 Directives: Not Supported 00:23:35.220 NVMe-MI: Not Supported 00:23:35.220 Virtualization Management: Not Supported 00:23:35.221 
Doorbell Buffer Config: Not Supported 00:23:35.221 Get LBA Status Capability: Not Supported 00:23:35.221 Command & Feature Lockdown Capability: Not Supported 00:23:35.221 Abort Command Limit: 4 00:23:35.221 Async Event Request Limit: 4 00:23:35.221 Number of Firmware Slots: N/A 00:23:35.221 Firmware Slot 1 Read-Only: N/A 00:23:35.221 Firmware Activation Without Reset: N/A 00:23:35.221 Multiple Update Detection Support: N/A 00:23:35.221 Firmware Update Granularity: No Information Provided 00:23:35.221 Per-Namespace SMART Log: No 00:23:35.221 Asymmetric Namespace Access Log Page: Not Supported 00:23:35.221 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:35.221 Command Effects Log Page: Supported 00:23:35.221 Get Log Page Extended Data: Supported 00:23:35.221 Telemetry Log Pages: Not Supported 00:23:35.221 Persistent Event Log Pages: Not Supported 00:23:35.221 Supported Log Pages Log Page: May Support 00:23:35.221 Commands Supported & Effects Log Page: Not Supported 00:23:35.221 Feature Identifiers & Effects Log Page: May Support 00:23:35.221 NVMe-MI Commands & Effects Log Page: May Support 00:23:35.221 Data Area 4 for Telemetry Log: Not Supported 00:23:35.221 Error Log Page Entries Supported: 128 00:23:35.221 Keep Alive: Supported 00:23:35.221 Keep Alive Granularity: 10000 ms 00:23:35.221 00:23:35.221 NVM Command Set Attributes 00:23:35.221 ========================== 00:23:35.221 Submission Queue Entry Size 00:23:35.221 Max: 64 00:23:35.221 Min: 64 00:23:35.221 Completion Queue Entry Size 00:23:35.221 Max: 16 00:23:35.221 Min: 16 00:23:35.221 Number of Namespaces: 32 00:23:35.221 Compare Command: Supported 00:23:35.221 Write Uncorrectable Command: Not Supported 00:23:35.221 Dataset Management Command: Supported 00:23:35.221 Write Zeroes Command: Supported 00:23:35.221 Set Features Save Field: Not Supported 00:23:35.221 Reservations: Supported 00:23:35.221 Timestamp: Not Supported 00:23:35.221 Copy: Supported 00:23:35.221 Volatile Write Cache: Present 00:23:35.221 Atomic Write Unit (Normal): 1 00:23:35.221 Atomic Write Unit (PFail): 1 00:23:35.221 Atomic Compare & Write Unit: 1 00:23:35.221 Fused Compare & Write: Supported 00:23:35.221 Scatter-Gather List 00:23:35.221 SGL Command Set: Supported 00:23:35.221 SGL Keyed: Supported 00:23:35.221 SGL Bit Bucket Descriptor: Not Supported 00:23:35.221 SGL Metadata Pointer: Not Supported 00:23:35.221 Oversized SGL: Not Supported 00:23:35.221 SGL Metadata Address: Not Supported 00:23:35.221 SGL Offset: Supported 00:23:35.221 Transport SGL Data Block: Not Supported 00:23:35.221 Replay Protected Memory Block: Not Supported 00:23:35.221 00:23:35.221 Firmware Slot Information 00:23:35.221 ========================= 00:23:35.221 Active slot: 1 00:23:35.221 Slot 1 Firmware Revision: 25.01 00:23:35.221 00:23:35.221 00:23:35.221 Commands Supported and Effects 00:23:35.221 ============================== 00:23:35.221 Admin Commands 00:23:35.221 -------------- 00:23:35.221 Get Log Page (02h): Supported 00:23:35.221 Identify (06h): Supported 00:23:35.221 Abort (08h): Supported 00:23:35.221 Set Features (09h): Supported 00:23:35.221 Get Features (0Ah): Supported 00:23:35.221 Asynchronous Event Request (0Ch): Supported 00:23:35.221 Keep Alive (18h): Supported 00:23:35.221 I/O Commands 00:23:35.221 ------------ 00:23:35.221 Flush (00h): Supported LBA-Change 00:23:35.221 Write (01h): Supported LBA-Change 00:23:35.221 Read (02h): Supported 00:23:35.221 Compare (05h): Supported 00:23:35.221 Write Zeroes (08h): Supported LBA-Change 00:23:35.221 Dataset Management (09h): 
Supported LBA-Change 00:23:35.221 Copy (19h): Supported LBA-Change 00:23:35.221 00:23:35.221 Error Log 00:23:35.221 ========= 00:23:35.221 00:23:35.221 Arbitration 00:23:35.221 =========== 00:23:35.221 Arbitration Burst: 1 00:23:35.221 00:23:35.221 Power Management 00:23:35.221 ================ 00:23:35.221 Number of Power States: 1 00:23:35.221 Current Power State: Power State #0 00:23:35.221 Power State #0: 00:23:35.221 Max Power: 0.00 W 00:23:35.221 Non-Operational State: Operational 00:23:35.221 Entry Latency: Not Reported 00:23:35.221 Exit Latency: Not Reported 00:23:35.221 Relative Read Throughput: 0 00:23:35.221 Relative Read Latency: 0 00:23:35.221 Relative Write Throughput: 0 00:23:35.221 Relative Write Latency: 0 00:23:35.221 Idle Power: Not Reported 00:23:35.221 Active Power: Not Reported 00:23:35.221 Non-Operational Permissive Mode: Not Supported 00:23:35.221 00:23:35.221 Health Information 00:23:35.221 ================== 00:23:35.221 Critical Warnings: 00:23:35.221 Available Spare Space: OK 00:23:35.221 Temperature: OK 00:23:35.221 Device Reliability: OK 00:23:35.221 Read Only: No 00:23:35.221 Volatile Memory Backup: OK 00:23:35.221 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:35.221 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:35.221 Available Spare: 0% 00:23:35.221 Available Spare Threshold: 0% 00:23:35.221 Life Percentage Used: [2024-10-21 12:08:11.681581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.681586] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8541e0) 00:23:35.221 [2024-10-21 12:08:11.681594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-10-21 12:08:11.681609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdc00, cid 7, qid 0 00:23:35.221 [2024-10-21 12:08:11.681839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.221 [2024-10-21 12:08:11.681846] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.221 [2024-10-21 12:08:11.681850] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.681854] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bdc00) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.681890] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:35.221 [2024-10-21 12:08:11.681901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd180) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.221 [2024-10-21 12:08:11.681919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd300) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.681925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.221 [2024-10-21 12:08:11.681930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd480) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.681934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.221 [2024-10-21 12:08:11.681939] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.681944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.221 [2024-10-21 12:08:11.681954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.681958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.681962] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.221 [2024-10-21 12:08:11.681969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-10-21 12:08:11.681982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.221 [2024-10-21 12:08:11.682182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.221 [2024-10-21 12:08:11.682188] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.221 [2024-10-21 12:08:11.682192] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.682196] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.221 [2024-10-21 12:08:11.682203] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.682207] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.221 [2024-10-21 12:08:11.682210] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.221 [2024-10-21 12:08:11.682217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-10-21 12:08:11.682230] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.221 [2024-10-21 12:08:11.682465] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.682474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.682477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.682489] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:35.222 [2024-10-21 12:08:11.682495] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:35.222 [2024-10-21 12:08:11.682505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682509] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682512] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.682521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.682532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.682734] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.682740] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.682744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.682758] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682764] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.682767] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.682774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.682784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.682999] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.683007] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.683011] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683018] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.683028] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.683042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.683052] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.683313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.683329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.683332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.683346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683350] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683354] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.683361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.683372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.683573] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.683579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.683583] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683587] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.683597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683601] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683605] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.683612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.683623] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.683802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.683809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.683813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.683827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683831] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.683835] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.683841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.683852] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.684047] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.684054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.684057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.684073] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.684087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.684098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.684270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.684278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.684282] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684286] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.684296] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 
[2024-10-21 12:08:11.684300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684304] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.684310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.684331] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.684499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.684507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.684511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.684525] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.684539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.684549] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.684749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.684755] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.684760] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684766] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.684778] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.684796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.684810] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.684986] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.684992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.684996] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.684999] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.685009] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.685016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.685019] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.685026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.685037] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.685205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.685213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.685219] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.685223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.222 [2024-10-21 12:08:11.685235] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.685241] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.685246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8541e0) 00:23:35.222 [2024-10-21 12:08:11.685253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-10-21 12:08:11.685265] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd600, cid 3, qid 0 00:23:35.222 [2024-10-21 12:08:11.689338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:35.222 [2024-10-21 12:08:11.689349] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:35.222 [2024-10-21 12:08:11.689353] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:35.222 [2024-10-21 12:08:11.689357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8bd600) on tqpair=0x8541e0 00:23:35.223 [2024-10-21 12:08:11.689365] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:35.223 0% 00:23:35.223 Data Units Read: 0 00:23:35.223 Data Units Written: 0 00:23:35.223 Host Read Commands: 0 00:23:35.223 Host Write Commands: 0 00:23:35.223 Controller Busy Time: 0 minutes 00:23:35.223 Power Cycles: 0 00:23:35.223 Power On Hours: 0 hours 00:23:35.223 Unsafe Shutdowns: 0 00:23:35.223 Unrecoverable Media Errors: 0 00:23:35.223 Lifetime Error Log Entries: 0 00:23:35.223 Warning Temperature Time: 0 minutes 00:23:35.223 Critical Temperature Time: 0 minutes 00:23:35.223 00:23:35.223 Number of Queues 00:23:35.223 ================ 00:23:35.223 Number of I/O Submission Queues: 127 00:23:35.223 Number of I/O Completion Queues: 127 00:23:35.223 00:23:35.223 Active Namespaces 00:23:35.223 ================= 00:23:35.223 Namespace ID:1 00:23:35.223 Error Recovery Timeout: Unlimited 00:23:35.223 Command Set Identifier: NVM (00h) 00:23:35.223 Deallocate: Supported 00:23:35.223 Deallocated/Unwritten Error: Not Supported 00:23:35.223 Deallocated Read Value: Unknown 00:23:35.223 Deallocate in Write Zeroes: Not Supported 00:23:35.223 Deallocated Guard Field: 0xFFFF 00:23:35.223 Flush: Supported 00:23:35.223 Reservation: Supported 00:23:35.223 Namespace Sharing Capabilities: Multiple Controllers 00:23:35.223 Size (in LBAs): 131072 (0GiB) 00:23:35.223 Capacity (in LBAs): 131072 (0GiB) 00:23:35.223 Utilization (in LBAs): 131072 (0GiB) 00:23:35.223 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:35.223 EUI64: ABCDEF0123456789 00:23:35.223 UUID: 
12798ad3-e272-4d5f-9dea-9bff7be1771f 00:23:35.223 Thin Provisioning: Not Supported 00:23:35.223 Per-NS Atomic Units: Yes 00:23:35.223 Atomic Boundary Size (Normal): 0 00:23:35.223 Atomic Boundary Size (PFail): 0 00:23:35.223 Atomic Boundary Offset: 0 00:23:35.223 Maximum Single Source Range Length: 65535 00:23:35.223 Maximum Copy Length: 65535 00:23:35.223 Maximum Source Range Count: 1 00:23:35.223 NGUID/EUI64 Never Reused: No 00:23:35.223 Namespace Write Protected: No 00:23:35.223 Number of LBA Formats: 1 00:23:35.223 Current LBA Format: LBA Format #00 00:23:35.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:35.223 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.223 rmmod nvme_tcp 00:23:35.223 rmmod nvme_fabrics 00:23:35.223 rmmod nvme_keyring 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1065840 ']' 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1065840 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1065840 ']' 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1065840 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.223 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1065840 00:23:35.485 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:35.485 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:35.485 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1065840' 00:23:35.485 killing process with pid 1065840 00:23:35.485 12:08:11 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1065840 00:23:35.485 12:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1065840 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.485 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.030 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.030 00:23:38.030 real 0m11.687s 00:23:38.030 user 0m8.710s 00:23:38.030 sys 0m6.193s 00:23:38.030 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:38.031 ************************************ 00:23:38.031 END TEST nvmf_identify 00:23:38.031 ************************************ 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.031 ************************************ 00:23:38.031 START TEST nvmf_perf 00:23:38.031 ************************************ 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:38.031 * Looking for test storage... 
00:23:38.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.031 --rc genhtml_branch_coverage=1 00:23:38.031 --rc genhtml_function_coverage=1 00:23:38.031 --rc genhtml_legend=1 00:23:38.031 --rc geninfo_all_blocks=1 00:23:38.031 --rc geninfo_unexecuted_blocks=1 00:23:38.031 00:23:38.031 ' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.031 --rc genhtml_branch_coverage=1 00:23:38.031 --rc genhtml_function_coverage=1 00:23:38.031 --rc genhtml_legend=1 00:23:38.031 --rc geninfo_all_blocks=1 00:23:38.031 --rc geninfo_unexecuted_blocks=1 00:23:38.031 00:23:38.031 ' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.031 --rc genhtml_branch_coverage=1 00:23:38.031 --rc genhtml_function_coverage=1 00:23:38.031 --rc genhtml_legend=1 00:23:38.031 --rc geninfo_all_blocks=1 00:23:38.031 --rc geninfo_unexecuted_blocks=1 00:23:38.031 00:23:38.031 ' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.031 --rc genhtml_branch_coverage=1 00:23:38.031 --rc genhtml_function_coverage=1 00:23:38.031 --rc genhtml_legend=1 00:23:38.031 --rc geninfo_all_blocks=1 00:23:38.031 --rc geninfo_unexecuted_blocks=1 00:23:38.031 00:23:38.031 ' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.031 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.032 12:08:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.032 12:08:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.177 12:08:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.177 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.178 12:08:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:23:46.178 00:23:46.178 --- 10.0.0.2 ping statistics --- 00:23:46.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.178 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:23:46.178 00:23:46.178 --- 10.0.0.1 ping statistics --- 00:23:46.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.178 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.178 12:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1070367 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1070367 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1070367 ']' 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:46.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.178 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.178 [2024-10-21 12:08:22.057717] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:23:46.178 [2024-10-21 12:08:22.057785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.178 [2024-10-21 12:08:22.145932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.178 [2024-10-21 12:08:22.198296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.178 [2024-10-21 12:08:22.198357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.178 [2024-10-21 12:08:22.198367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.178 [2024-10-21 12:08:22.198374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.178 [2024-10-21 12:08:22.198380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.178 [2024-10-21 12:08:22.200614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.178 [2024-10-21 12:08:22.200781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.178 [2024-10-21 12:08:22.200944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.178 [2024-10-21 12:08:22.200944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:46.440 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:47.012 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:47.012 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:47.272 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:47.272 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:47.534 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
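The rpc.py exchange traced above is the whole bdev staging step for this perf run. Condensed into a standalone sketch — the paths and the Nvme0 name are taken from this run, and the gen_nvme.sh pipe into load_subsystem_config is inferred from the paired perf.sh@28 entries rather than quoted from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # attach the local NVMe controllers as SPDK bdevs (Nvme0n1 in this run)
    $spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
    # recover the PCI address of the attached controller: 0000:65:00.0 here
    local_nvme_trid=$($rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # 64 MiB RAM-backed bdev with 512-byte blocks; the RPC reports back "Malloc0"
    $rpc bdev_malloc_create 64 512

The subsystem wiring that follows in the trace (nvmf_create_transport -t tcp -o, nvmf_create_subsystem, nvmf_subsystem_add_ns for Malloc0 and Nvme0n1, nvmf_subsystem_add_listener on 10.0.0.2:4420) drives the same $rpc handle against the target started earlier in the namespace.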
00:23:47.534 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:47.534 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:47.534 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:47.534 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.534 [2024-10-21 12:08:24.038237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.534 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.794 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:47.794 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.054 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:48.054 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:48.054 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.315 [2024-10-21 12:08:24.801146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.315 12:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:48.575 12:08:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:48.575 12:08:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:48.575 12:08:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:48.575 12:08:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:49.957 Initializing NVMe Controllers 00:23:49.957 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:49.957 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:49.957 Initialization complete. Launching workers. 
00:23:49.957 ======================================================== 00:23:49.957 Latency(us) 00:23:49.957 Device Information : IOPS MiB/s Average min max 00:23:49.957 PCIE (0000:65:00.0) NSID 1 from core 0: 78689.23 307.38 406.04 13.36 4779.94 00:23:49.957 ======================================================== 00:23:49.957 Total : 78689.23 307.38 406.04 13.36 4779.94 00:23:49.957 00:23:49.957 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.342 Initializing NVMe Controllers 00:23:51.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.342 Initialization complete. Launching workers. 00:23:51.342 ======================================================== 00:23:51.342 Latency(us) 00:23:51.342 Device Information : IOPS MiB/s Average min max 00:23:51.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.79 0.31 12860.17 256.67 45636.48 00:23:51.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.88 0.18 21671.57 7965.73 53873.14 00:23:51.342 ======================================================== 00:23:51.342 Total : 126.67 0.49 16121.08 256.67 53873.14 00:23:51.342 00:23:51.342 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.727 Initializing NVMe Controllers 00:23:52.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.727 Initialization complete. Launching workers. 00:23:52.727 ======================================================== 00:23:52.727 Latency(us) 00:23:52.727 Device Information : IOPS MiB/s Average min max 00:23:52.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11823.00 46.18 2713.43 468.12 6431.43 00:23:52.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3832.00 14.97 8388.37 6254.40 15925.14 00:23:52.727 ======================================================== 00:23:52.727 Total : 15655.00 61.15 4102.53 468.12 15925.14 00:23:52.727 00:23:52.727 12:08:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:52.727 12:08:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:52.727 12:08:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:55.364 Initializing NVMe Controllers 00:23:55.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.364 Controller IO queue size 128, less than required. 00:23:55.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:55.364 Controller IO queue size 128, less than required. 00:23:55.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:55.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:55.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:55.364 Initialization complete. Launching workers. 00:23:55.364 ======================================================== 00:23:55.364 Latency(us) 00:23:55.364 Device Information : IOPS MiB/s Average min max 00:23:55.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1892.66 473.17 68726.18 40742.20 105386.20 00:23:55.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.58 149.64 221814.37 59506.02 327306.05 00:23:55.364 ======================================================== 00:23:55.364 Total : 2491.24 622.81 105509.11 40742.20 327306.05 00:23:55.364 00:23:55.364 12:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:55.364 No valid NVMe controllers or AIO or URING devices found 00:23:55.364 Initializing NVMe Controllers 00:23:55.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.364 Controller IO queue size 128, less than required. 00:23:55.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:55.364 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:55.364 Controller IO queue size 128, less than required. 00:23:55.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:55.364 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:55.364 WARNING: Some requested NVMe devices were skipped 00:23:55.364 12:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:57.935 Initializing NVMe Controllers 00:23:57.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.935 Controller IO queue size 128, less than required. 00:23:57.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:57.935 Controller IO queue size 128, less than required. 00:23:57.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:57.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:57.935 Initialization complete. Launching workers. 
00:23:57.935 00:23:57.935 ==================== 00:23:57.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:57.935 TCP transport: 00:23:57.935 polls: 40555 00:23:57.935 idle_polls: 26786 00:23:57.935 sock_completions: 13769 00:23:57.935 nvme_completions: 7263 00:23:57.935 submitted_requests: 10934 00:23:57.935 queued_requests: 1 00:23:57.935 00:23:57.935 ==================== 00:23:57.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:57.935 TCP transport: 00:23:57.935 polls: 42324 00:23:57.935 idle_polls: 25664 00:23:57.935 sock_completions: 16660 00:23:57.935 nvme_completions: 7385 00:23:57.935 submitted_requests: 11020 00:23:57.935 queued_requests: 1 00:23:57.935 ======================================================== 00:23:57.935 Latency(us) 00:23:57.935 Device Information : IOPS MiB/s Average min max 00:23:57.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1814.28 453.57 71348.80 39356.25 120612.73 00:23:57.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1844.76 461.19 69828.47 30998.46 107950.77 00:23:57.935 ======================================================== 00:23:57.935 Total : 3659.03 914.76 70582.30 30998.46 120612.73 00:23:57.935 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.935 rmmod nvme_tcp 00:23:57.935 rmmod nvme_fabrics 00:23:57.935 rmmod nvme_keyring 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.935 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1070367 ']' 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1070367 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1070367 ']' 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1070367 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070367 00:23:57.936 12:08:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070367' 00:23:57.936 killing process with pid 1070367 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1070367 00:23:57.936 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1070367 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.847 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.848 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.848 12:08:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:02.405 00:24:02.405 real 0m24.166s 00:24:02.405 user 0m58.054s 00:24:02.405 sys 0m8.545s 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:02.405 ************************************ 00:24:02.405 END TEST nvmf_perf 00:24:02.405 ************************************ 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.405 ************************************ 00:24:02.405 START TEST nvmf_fio_host 00:24:02.405 ************************************ 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:02.405 * Looking for test storage... 
00:24:02.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.405 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.406 --rc genhtml_branch_coverage=1 00:24:02.406 --rc genhtml_function_coverage=1 00:24:02.406 --rc genhtml_legend=1 00:24:02.406 --rc geninfo_all_blocks=1 00:24:02.406 --rc geninfo_unexecuted_blocks=1 00:24:02.406 00:24:02.406 ' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.406 --rc genhtml_branch_coverage=1 00:24:02.406 --rc genhtml_function_coverage=1 00:24:02.406 --rc genhtml_legend=1 00:24:02.406 --rc geninfo_all_blocks=1 00:24:02.406 --rc geninfo_unexecuted_blocks=1 00:24:02.406 00:24:02.406 ' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.406 --rc genhtml_branch_coverage=1 00:24:02.406 --rc genhtml_function_coverage=1 00:24:02.406 --rc genhtml_legend=1 00:24:02.406 --rc geninfo_all_blocks=1 00:24:02.406 --rc geninfo_unexecuted_blocks=1 00:24:02.406 00:24:02.406 ' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.406 --rc genhtml_branch_coverage=1 00:24:02.406 --rc genhtml_function_coverage=1 00:24:02.406 --rc genhtml_legend=1 00:24:02.406 --rc geninfo_all_blocks=1 00:24:02.406 --rc geninfo_unexecuted_blocks=1 00:24:02.406 00:24:02.406 ' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.406 12:08:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.406 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.407 
12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.407 12:08:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.545 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:10.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:10.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:10.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:10.546 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.546 12:08:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:24:10.546 00:24:10.546 --- 10.0.0.2 ping statistics --- 00:24:10.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.546 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:24:10.546 00:24:10.546 --- 10.0.0.1 ping statistics --- 00:24:10.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.546 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1077381 00:24:10.546 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1077381 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1077381 ']' 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.547 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.547 [2024-10-21 12:08:46.226866] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
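Annotation: the namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a handful of iproute2/iptables commands. A minimal standalone sketch, using the interface names, addresses, and rule comment from this run; this is an illustration of the helper's observable steps, not the helper itself:

#!/usr/bin/env bash
# Sketch of the loopback NVMe/TCP topology nvmf_tcp_init builds:
# one physical port moves into a namespace (target side), its sibling
# stays in the default namespace (initiator side).
TGT_IF=cvl_0_0            # target-side port, moves into the namespace
INI_IF=cvl_0_1            # initiator-side port, stays in the default ns
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic, tagged with a comment so teardown can find the rule.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before any NVMe traffic flows.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1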
00:24:10.547 [2024-10-21 12:08:46.226934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.547 [2024-10-21 12:08:46.315764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.547 [2024-10-21 12:08:46.369361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.547 [2024-10-21 12:08:46.369414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.547 [2024-10-21 12:08:46.369423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.547 [2024-10-21 12:08:46.369431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.547 [2024-10-21 12:08:46.369437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.547 [2024-10-21 12:08:46.371449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.547 [2024-10-21 12:08:46.371610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.547 [2024-10-21 12:08:46.371771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.547 [2024-10-21 12:08:46.371771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.547 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.547 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:10.547 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.808 [2024-10-21 12:08:47.224297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.808 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:10.808 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.808 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.808 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:11.068 Malloc1 00:24:11.068 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.329 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:11.588 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.589 [2024-10-21 12:08:48.102397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.589 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:11.849 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.416 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:12.416 fio-3.35 00:24:12.416 Starting 1 thread 00:24:14.962 00:24:14.962 test: (groupid=0, jobs=1): 
err= 0: pid=1077973: Mon Oct 21 12:08:51 2024 00:24:14.962 read: IOPS=13.4k, BW=52.4MiB/s (55.0MB/s)(105MiB/2005msec) 00:24:14.962 slat (usec): min=2, max=280, avg= 2.15, stdev= 2.39 00:24:14.962 clat (usec): min=3187, max=9084, avg=5234.82, stdev=371.70 00:24:14.962 lat (usec): min=3189, max=9086, avg=5236.97, stdev=371.77 00:24:14.962 clat percentiles (usec): 00:24:14.962 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 4948], 00:24:14.962 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:24:14.962 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5800], 00:24:14.962 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7242], 99.95th=[ 7635], 00:24:14.962 | 99.99th=[ 8848] 00:24:14.962 bw ( KiB/s): min=52584, max=54336, per=100.00%, avg=53686.00, stdev=780.08, samples=4 00:24:14.962 iops : min=13146, max=13584, avg=13421.50, stdev=195.02, samples=4 00:24:14.962 write: IOPS=13.4k, BW=52.4MiB/s (54.9MB/s)(105MiB/2005msec); 0 zone resets 00:24:14.962 slat (usec): min=2, max=277, avg= 2.22, stdev= 1.85 00:24:14.962 clat (usec): min=2625, max=8586, avg=4249.18, stdev=319.13 00:24:14.962 lat (usec): min=2627, max=8588, avg=4251.40, stdev=319.27 00:24:14.962 clat percentiles (usec): 00:24:14.962 | 1.00th=[ 3556], 5.00th=[ 3785], 10.00th=[ 3884], 20.00th=[ 4015], 00:24:14.962 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:24:14.962 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4686], 00:24:14.962 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 6587], 99.95th=[ 7504], 00:24:14.962 | 99.99th=[ 8586] 00:24:14.962 bw ( KiB/s): min=52936, max=53904, per=100.00%, avg=53654.00, stdev=478.90, samples=4 00:24:14.962 iops : min=13234, max=13476, avg=13413.50, stdev=119.73, samples=4 00:24:14.962 lat (msec) : 4=9.25%, 10=90.75% 00:24:14.962 cpu : usr=74.35%, sys=24.45%, ctx=20, majf=0, minf=8 00:24:14.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:14.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:14.962 issued rwts: total=26904,26893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:14.962 00:24:14.962 Run status group 0 (all jobs): 00:24:14.962 READ: bw=52.4MiB/s (55.0MB/s), 52.4MiB/s-52.4MiB/s (55.0MB/s-55.0MB/s), io=105MiB (110MB), run=2005-2005msec 00:24:14.962 WRITE: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2005-2005msec 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:14.962 
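Annotation: the fio_plugin wrapper traced above (and re-entered below for the mock SGL config) exists because a sanitizer-instrumented spdk_nvme plugin must have its sanitizer runtime preloaded ahead of the plugin, or fio fails at dlopen time. A condensed sketch of that logic, with the paths taken from this run; the break on first match is a simplification of the original loop:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # ldd prints "libasan.so.N => /path/libasan.so.N (0x...)"; field 3 is the path.
  lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n "$lib" ]] && { asan_lib=$lib; break; }
done
# Both lookups came back empty in this run, so LD_PRELOAD carries only the plugin.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096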
12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:14.962 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.962 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:14.962 fio-3.35 00:24:14.962 Starting 1 thread 00:24:17.507 00:24:17.507 test: (groupid=0, jobs=1): err= 0: pid=1078789: Mon Oct 21 12:08:53 2024 00:24:17.507 read: IOPS=9602, BW=150MiB/s (157MB/s)(301MiB/2006msec) 00:24:17.507 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.58 00:24:17.507 clat (usec): min=1512, max=15811, avg=8086.89, stdev=1900.39 00:24:17.507 lat (usec): min=1515, max=15814, avg=8090.50, stdev=1900.49 00:24:17.507 clat percentiles (usec): 00:24:17.507 | 1.00th=[ 4113], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6390], 00:24:17.507 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8586], 00:24:17.507 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11076], 00:24:17.507 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13960], 99.95th=[14877], 00:24:17.507 | 99.99th=[15270] 00:24:17.507 bw ( KiB/s): min=65728, max=87520, per=50.18%, avg=77096.00, stdev=9120.52, samples=4 00:24:17.507 iops : min= 4108, max= 5470, avg=4818.50, stdev=570.03, samples=4 00:24:17.507 write: IOPS=5561, BW=86.9MiB/s (91.1MB/s)(157MiB/1801msec); 0 zone resets 00:24:17.507 slat (usec): min=39, max=408, 
avg=40.93, stdev= 7.69 00:24:17.507 clat (usec): min=1458, max=15946, avg=9098.07, stdev=1417.72 00:24:17.507 lat (usec): min=1498, max=16064, avg=9139.00, stdev=1419.59 00:24:17.507 clat percentiles (usec): 00:24:17.507 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7898], 00:24:17.507 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:24:17.507 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:24:17.507 | 99.00th=[12649], 99.50th=[13304], 99.90th=[15533], 99.95th=[15795], 00:24:17.507 | 99.99th=[15926] 00:24:17.507 bw ( KiB/s): min=67840, max=91008, per=89.77%, avg=79880.00, stdev=9660.48, samples=4 00:24:17.507 iops : min= 4240, max= 5688, avg=4992.50, stdev=603.78, samples=4 00:24:17.507 lat (msec) : 2=0.01%, 4=0.69%, 10=79.40%, 20=19.91% 00:24:17.507 cpu : usr=84.69%, sys=14.16%, ctx=15, majf=0, minf=28 00:24:17.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:17.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.507 issued rwts: total=19263,10016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.507 00:24:17.507 Run status group 0 (all jobs): 00:24:17.507 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=301MiB (316MB), run=2006-2006msec 00:24:17.507 WRITE: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=157MiB (164MB), run=1801-1801msec 00:24:17.507 12:08:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.507 rmmod nvme_tcp 00:24:17.507 rmmod nvme_fabrics 00:24:17.507 rmmod nvme_keyring 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1077381 ']' 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1077381 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1077381 ']' 00:24:17.507 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
1077381 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077381 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077381' 00:24:17.768 killing process with pid 1077381 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1077381 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1077381 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.768 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.313 00:24:20.313 real 0m17.904s 00:24:20.313 user 1m11.150s 00:24:20.313 sys 0m7.690s 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.313 ************************************ 00:24:20.313 END TEST nvmf_fio_host 00:24:20.313 ************************************ 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.313 ************************************ 00:24:20.313 START TEST nvmf_failover 00:24:20.313 ************************************ 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:20.313 * Looking for test storage... 00:24:20.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.313 --rc genhtml_branch_coverage=1 00:24:20.313 --rc genhtml_function_coverage=1 00:24:20.313 --rc genhtml_legend=1 00:24:20.313 --rc geninfo_all_blocks=1 00:24:20.313 --rc geninfo_unexecuted_blocks=1 00:24:20.313 00:24:20.313 ' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.313 --rc genhtml_branch_coverage=1 00:24:20.313 --rc genhtml_function_coverage=1 00:24:20.313 --rc genhtml_legend=1 00:24:20.313 --rc geninfo_all_blocks=1 00:24:20.313 --rc geninfo_unexecuted_blocks=1 00:24:20.313 00:24:20.313 ' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.313 --rc genhtml_branch_coverage=1 00:24:20.313 --rc genhtml_function_coverage=1 00:24:20.313 --rc genhtml_legend=1 00:24:20.313 --rc geninfo_all_blocks=1 00:24:20.313 --rc geninfo_unexecuted_blocks=1 00:24:20.313 00:24:20.313 ' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.313 --rc genhtml_branch_coverage=1 00:24:20.313 --rc genhtml_function_coverage=1 00:24:20.313 --rc genhtml_legend=1 00:24:20.313 --rc geninfo_all_blocks=1 00:24:20.313 --rc geninfo_unexecuted_blocks=1 00:24:20.313 00:24:20.313 ' 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.313 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
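Annotation: the device-discovery pass whose trace follows buckets NICs by PCI vendor:device ID and then, because this job runs with e810 test hardware, keeps only the E810 ports. A condensed sketch under the assumption that pci_bus_cache maps "vendor:device" to space-separated BDF lists, which is the shape implied by the ${pci_bus_cache["$intel:0x159b"]} lookups in the trace:

declare -A pci_bus_cache   # assumed: "vendor:device" -> space-separated BDFs
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C
e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810-XXV, the 0000:4b:00.x ports here
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # ConnectX-5; the real list covers ~10 IDs
pci_devs=("${e810[@]}")                     # the [[ e810 == e810 ]] branch above
for pci in "${pci_devs[@]}"; do
  # vendor/device sysfs attributes hold the IDs echoed as "Found ... (0x8086 - 0x159b)".
  echo "Found $pci ($(<"/sys/bus/pci/devices/$pci/vendor") - $(<"/sys/bus/pci/devices/$pci/device"))"
done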
00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.314 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:28.456 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:28.456 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:28.456 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.456 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:28.456 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.457 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:24:28.457 00:24:28.457 --- 10.0.0.2 ping statistics --- 00:24:28.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.457 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:28.457 00:24:28.457 --- 10.0.0.1 ping statistics --- 00:24:28.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.457 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1083557 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1083557 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1083557 ']' 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.457 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.457 [2024-10-21 12:09:04.326704] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:24:28.457 [2024-10-21 12:09:04.326771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.457 [2024-10-21 12:09:04.416100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.457 [2024-10-21 12:09:04.467726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
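Annotation: nvmfappstart launches nvmf_tgt inside the namespace and then waitforlisten blocks until the app's RPC socket answers, which is why the EAL and reactor notices interleave with the "Waiting for process..." message here. The helper's internals are not shown in this excerpt; a hypothetical sketch of the observable behaviour, polling the default /var/tmp/spdk.sock with the pid from this run (rpc_get_methods is a standard SPDK RPC):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=1083557
for ((i = 0; i < 100; i++)); do
  # Bail out early if the target died instead of coming up.
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
  # Any successful RPC proves the UNIX domain socket is up and serving.
  "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.5
done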
00:24:28.457 [2024-10-21 12:09:04.467777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.457 [2024-10-21 12:09:04.467785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.457 [2024-10-21 12:09:04.467793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.457 [2024-10-21 12:09:04.467799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.457 [2024-10-21 12:09:04.469620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.457 [2024-10-21 12:09:04.469783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.457 [2024-10-21 12:09:04.469785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.718 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.979 [2024-10-21 12:09:05.368889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.979 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.241 Malloc0 00:24:29.241 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.241 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.502 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.762 [2024-10-21 12:09:06.193471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.762 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.024 [2024-10-21 12:09:06.386071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.024 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.024 [2024-10-21 12:09:06.582851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1083931
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1083931 /var/tmp/bdevperf.sock
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1083931 ']'
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:30.284 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:31.226 12:09:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:31.226 12:09:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:31.226 12:09:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:31.486 NVMe0n1
00:24:31.486 12:09:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:31.747
00:24:31.747 12:09:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:31.747 12:09:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1084263
00:24:31.747 12:09:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:32.689 12:09:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:32.949 [2024-10-21 12:09:09.307998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89030 is same with the state(6) to be set
00:24:32.949 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0xb89030 repeats through 12:09:09.308199; duplicate lines collapsed ...]
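Condensed from the trace above, the complete provisioning and multipath setup is the RPC sequence below. Every call appears verbatim in the log; $rpc and $nqn are shorthands introduced here for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Target side: TCP transport, one malloc-backed namespace, three portals
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
    done
    # Initiator side: bdevperf in wait mode (-z) with its own RPC socket,
    # then 4420 and 4421 registered as failover paths of the same bdev NVMe0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x failover
    # Kick off the 15 s verify workload asynchronously
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &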
00:24:32.949 12:09:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:36.248 12:09:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:36.248
00:24:36.249 12:09:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:36.509 [2024-10-21 12:09:12.931777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89e30 is same with the state(6) to be set
00:24:36.509 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0xb89e30 repeats through 12:09:12.931912; duplicate lines collapsed ...]
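This run never inspects the path state directly, but when reproducing a failover by hand it can be useful to query bdevperf's view of NVMe0 between the remove/attach steps. bdev_nvme_get_controllers is the stock SPDK RPC for that; its use here is an assumption of this write-up, not something the test executed:

    # Assumed usage, not part of this log: list NVMe0's controllers and paths
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0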
00:24:36.509 12:09:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:39.806 12:09:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:39.806 [2024-10-21 12:09:16.124229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:39.806 12:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:40.747 12:09:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:40.747 [2024-10-21 12:09:17.317380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8ad30 is same with the state(6) to be set
00:24:40.747 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0xb8ad30 repeats through 12:09:17.317444; duplicate lines collapsed ...]
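Laid out end to end, the cadence the script drove between 12:09:09 and 12:09:17 is: drop the active listener, wait for the initiator to fail over, add or restore a path, repeat. An illustrative paraphrase of the failover.sh@43 through @57 steps, with $rpc and $nqn as in the earlier sketch (a sketch of the sequence, not the script itself):

    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # @43: kill path 1
    sleep 3                                                               # @45: I/O fails over to 4421
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn -x failover            # @47: register spare path
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # @48: kill path 2
    sleep 3                                                               # @50: fail over to 4422
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # @53: bring path 1 back
    sleep 1                                                               # @55
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # @57: kill path 3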
00:24:41.007 12:09:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1084263
00:24:47.601 {
00:24:47.601   "results": [
00:24:47.601     {
00:24:47.601       "job": "NVMe0n1",
00:24:47.601       "core_mask": "0x1",
00:24:47.601       "workload": "verify",
00:24:47.601       "status": "finished",
00:24:47.601       "verify_range": {
00:24:47.601         "start": 0,
00:24:47.601         "length": 16384
00:24:47.601       },
00:24:47.601       "queue_depth": 128,
00:24:47.601       "io_size": 4096,
00:24:47.601       "runtime": 15.04396,
00:24:47.601       "iops": 12299.952937923259,
00:24:47.601       "mibps": 48.04669116376273,
00:24:47.601       "io_failed": 8381,
00:24:47.601       "io_timeout": 0,
00:24:47.601       "avg_latency_us": 9909.364690149123,
00:24:47.601       "min_latency_us": 535.8933333333333,
00:24:47.601       "max_latency_us": 41287.68
00:24:47.601     }
00:24:47.601   ],
00:24:47.601   "core_count": 1
00:24:47.601 }
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1083931 ']'
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083931'
00:24:47.601 killing process with pid 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1083931
00:24:47.601 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:47.601 [2024-10-21 12:09:06.664520] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
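The results block above is internally consistent: with 4096-byte I/Os, throughput in MiB/s is iops * io_size / 2^20, and 12299.95 * 4096 / 1048576 is approximately 48.047, which matches the reported mibps up to rounding. A quick check with bc:

    $ echo 'scale=4; 12299.952937923259 * 4096 / 1048576' | bc
    48.0466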
00:24:47.602 [2024-10-21 12:09:06.664600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083931 ]
00:24:47.602 [2024-10-21 12:09:06.747036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:47.602 [2024-10-21 12:09:06.799537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:47.602 Running I/O for 15 seconds...
00:24:47.602 11077.00 IOPS, 43.27 MiB/s [2024-10-21T10:09:24.197Z]
00:24:47.602 [2024-10-21 12:09:09.310355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.602 [2024-10-21 12:09:09.310388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.602 [... the same print_command/print_completion NOTICE pair repeats for every outstanding I/O on the deleted queue pair: READ commands lba:96928 through lba:97144 and WRITE commands lba:97152 through lba:97768 (order interleaved), each completed ABORTED - SQ DELETION (00/08) qid:1; per-command duplicates collapsed ...]
00:24:47.604 [2024-10-21 12:09:09.312243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.604 [2024-10-21 12:09:09.312251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0
00:24:47.604 [2024-10-21 12:09:09.312259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.604 [2024-10-21 12:09:09.312296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.604 [2024-10-21 12:09:09.312306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.604 [... the same ASYNC EVENT REQUEST abort pair repeats for admin cid:1, cid:2 and cid:3; duplicates collapsed ...]
00:24:47.604 [2024-10-21 12:09:09.312367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae7d0 is same with the state(6) to be set
00:24:47.604 [2024-10-21 12:09:09.312599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.604 [2024-10-21 12:09:09.312607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.604 [2024-10-21 12:09:09.312613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0
00:24:47.604 [2024-10-21 12:09:09.312621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.605 [... the same aborting-queued-i/o / manual-completion triple repeats for lba:97792 through lba:97904; duplicates collapsed ...]
00:24:47.605 [2024-10-21 12:09:09.313037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.605 [2024-10-21 12:09:09.313042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*:
Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97920 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97928 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97936 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 
12:09:09.313209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.313238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.313245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.313253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.313259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.323408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.323436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.323463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.323491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.605 [2024-10-21 12:09:09.323518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.605 [2024-10-21 12:09:09.323524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:24:47.605 [2024-10-21 12:09:09.323531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.605 [2024-10-21 12:09:09.323539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 
[2024-10-21 12:09:09.323834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.323975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.323981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.323987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.323994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.324002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.324007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.324014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.324021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.324029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.324034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.324040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.324047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.324055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.324061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.324067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.324074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.324081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.606 [2024-10-21 12:09:09.324087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.606 [2024-10-21 12:09:09.324093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:24:47.606 [2024-10-21 12:09:09.324100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.606 [2024-10-21 12:09:09.324108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:47.607 [2024-10-21 12:09:09.324325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324484] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:24:47.607 [2024-10-21 12:09:09.324741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.607 [2024-10-21 12:09:09.324750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.607 [2024-10-21 12:09:09.324755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.607 [2024-10-21 12:09:09.324761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 
12:09:09.324809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324970] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.324976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.324983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.324991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.324996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.325006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.325014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.325021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.325026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.325032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.325040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.325047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.325053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.325059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.325066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.325073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.325079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.325085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.325092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.325100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 
12:09:09.332959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.332975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.332980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.332986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.332993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.333001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.333006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.333019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.333027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.333032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.333038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.333046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.333053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.333059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.608 [2024-10-21 12:09:09.333064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:24:47.608 [2024-10-21 12:09:09.333074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.608 [2024-10-21 12:09:09.333081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.608 [2024-10-21 12:09:09.333087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.609 [2024-10-21 12:09:09.333093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:24:47.609 [2024-10-21 12:09:09.333100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.609 [2024-10-21 12:09:09.333108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.609 [2024-10-21 12:09:09.333113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.609 [2024-10-21 12:09:09.333120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 
00:24:47.609 [2024-10-21 12:09:09.333127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same three-record pattern (nvme_qpair_abort_queued_reqs: aborting queued i/o -> nvme_qpair_manual_complete_request: Command completed manually -> WRITE lba:N len:8 PRP1 0x0 PRP2 0x0 -> ABORTED - SQ DELETION (00/08)) repeats for every queued WRITE from lba:97576 through lba:97776 in steps of 8 ...]
00:24:47.610 [2024-10-21 12:09:09.333864] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21cf750 was disconnected and freed. reset controller. 
00:24:47.610 [2024-10-21 12:09:09.333874] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:24:47.610 [2024-10-21 12:09:09.333882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:47.610 [2024-10-21 12:09:09.333932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae7d0 (9): Bad file descriptor 
00:24:47.610 [2024-10-21 12:09:09.337448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:47.610 [2024-10-21 12:09:09.415492] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
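Read as a sequence, the burst above is one complete path-failure cycle: the TCP connection to 10.0.0.2:4420 drops, every queued WRITE on the I/O qpair is completed manually with ABORTED - SQ DELETION (00/08), bdev_nvme's disconnected-qpair callback frees the qpair and starts a failover to the next registered path (10.0.0.2:4421), the controller is marked failed and disconnected, and the reset completes so I/O can resume on the new path. When skimming a multi-megabyte autotest log for these milestones, a small filter helps; the following is a minimal sketch in plain Python, with regexes keyed to the exact message formats visible in this log (message wording in other SPDK versions is an assumption):

import re
import sys

# Milestone patterns, taken verbatim from the messages in this log.
EVENTS = [
    ("qpair_freed",  re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed")),
    ("failover",     re.compile(r"Start failover from (\S+) to (\S+)")),
    ("ctrlr_failed", re.compile(r"\[([^\]]+)\] in failed state")),
    ("reset_begin",  re.compile(r"\[([^\]]+)\] resetting controller")),
    ("reset_done",   re.compile(r"Resetting controller successful")),
]

def scan(stream):
    # Yield (event_name, captured_fields) for each milestone line.
    for line in stream:
        for name, pat in EVENTS:
            m = pat.search(line)
            if m:
                yield name, m.groups()
                break

if __name__ == "__main__":
    for name, fields in scan(sys.stdin):
        print(name, *fields)

Fed this build log on stdin, it prints the qpair_freed / failover / ctrlr_failed / reset_begin / reset_done milestones in order, one cycle per path failure, and drops the per-command abort noise.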
00:24:47.610 10753.50 IOPS, 42.01 MiB/s [2024-10-21T10:09:24.205Z] 10882.33 IOPS, 42.51 MiB/s [2024-10-21T10:09:24.205Z] 11036.75 IOPS, 43.11 MiB/s [2024-10-21T10:09:24.205Z] 
[2024-10-21 12:09:12.932016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
[2024-10-21 12:09:12.932046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same command/completion pair repeats for the rest of the in-flight I/O on this qpair: WRITEs lba:52336 through lba:52560 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs lba:51544 through lba:52312 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completed ABORTED - SQ DELETION (00/08) ...]
00:24:47.613 [2024-10-21 12:09:12.933549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1740 is same with the state(6) to be set 
00:24:47.613 [2024-10-21 12:09:12.933557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:47.613 [2024-10-21 12:09:12.933561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:47.613 [2024-10-21 12:09:12.933566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52320 len:8 PRP1 0x0 PRP2 0x0 
00:24:47.613 [2024-10-21 12:09:12.933571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.613 [2024-10-21 12:09:12.933600] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21d1740 was disconnected and freed. reset controller. 
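Each failover cycle produces a storm like this: hundreds of near-identical abort records that differ only in cid and lba. If the point of reading the log is to confirm that the aborted LBAs form a contiguous run and nothing else failed, it is easier to collapse each storm into a per-opcode summary. A minimal sketch, with the same caveat as above (the regex is keyed to this log's nvme_io_qpair_print_command format):

import re
import sys
from collections import defaultdict

# One record per aborted command, as printed by nvme_io_qpair_print_command.
CMD = re.compile(r"(READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

def summarize(stream):
    spans = defaultdict(lambda: [0, None, None])  # opcode -> [count, lo, hi]
    for line in stream:
        m = CMD.search(line)
        if not m:
            continue
        opc, lba = m.group(1), int(m.group(2))
        count, lo, hi = spans[opc]
        spans[opc] = [count + 1,
                      lba if lo is None else min(lo, lba),
                      lba if hi is None else max(hi, lba)]
    return spans

if __name__ == "__main__":
    for opc, (count, lo, hi) in sorted(summarize(sys.stdin).items()):
        print(f"{opc}: {count} aborted commands, lba {lo}..{hi}")

For the storm above this would report one READ span and one WRITE span, which is exactly what a clean submission-queue teardown should look like.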
00:24:47.613 [2024-10-21 12:09:12.933607] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:47.613 [... 4 command/completion NOTICE pairs elided: outstanding ASYNC EVENT REQUEST (0c) commands on qid:0 (cid:3,2,1,0), each aborted with ABORTED - SQ DELETION (00/08) ...]
00:24:47.613 [2024-10-21 12:09:12.933678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:47.613 [2024-10-21 12:09:12.936143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.613 [2024-10-21 12:09:12.936164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae7d0 (9): Bad file descriptor
00:24:47.613 [2024-10-21 12:09:12.972044] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:47.613 11296.80 IOPS, 44.13 MiB/s [2024-10-21T10:09:24.208Z] 11551.17 IOPS, 45.12 MiB/s [2024-10-21T10:09:24.208Z] 11767.86 IOPS, 45.97 MiB/s [2024-10-21T10:09:24.208Z] 11911.12 IOPS, 46.53 MiB/s [2024-10-21T10:09:24.208Z] 12031.89 IOPS, 47.00 MiB/s [2024-10-21T10:09:24.208Z]
00:24:47.613 [2024-10-21 12:09:17.317589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.613 [2024-10-21 12:09:17.317618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.616 [... ~126 further command/completion NOTICE pairs elided: queued WRITE (sqid:1, lba 125392-125704) and READ (sqid:1, lba 124688-125368) commands, len:8, each aborted with ABORTED - SQ DELETION (00/08) ...]
00:24:47.616 [2024-10-21 12:09:17.319125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbac0 is same with the state(6) to be set
00:24:47.616 [2024-10-21 12:09:17.319131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.616 [2024-10-21 12:09:17.319135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.616 [2024-10-21 12:09:17.319140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125376 len:8 PRP1 0x0 PRP2 0x0
00:24:47.616 [2024-10-21 12:09:17.319145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.616 [2024-10-21 12:09:17.319176] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21dbac0 was disconnected and freed. reset controller.
00:24:47.616 [2024-10-21 12:09:17.319183] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:47.616 [... 4 command/completion NOTICE pairs elided: outstanding ASYNC EVENT REQUEST (0c) commands on qid:0 (cid:0,1,2,3), each aborted with ABORTED - SQ DELETION (00/08) ...]
00:24:47.616 [2024-10-21 12:09:17.319244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:47.616 [2024-10-21 12:09:17.319261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae7d0 (9): Bad file descriptor
00:24:47.616 [2024-10-21 12:09:17.321699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.616 [2024-10-21 12:09:17.389762] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
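[Editor's note: both failover events above follow the same bdev_nvme path: every command queued on the dead qpair is completed as ABORTED - SQ DELETION, the qpair is freed, the transport ID is switched to the next listener, and the controller is reset. A minimal sketch for pulling just those state transitions out of the noisy per-command output; the log path is the try.txt this test captures to, adjust as needed:
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' "$log" ]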
00:24:47.616 12042.80 IOPS, 47.04 MiB/s [2024-10-21T10:09:24.211Z] 12117.36 IOPS, 47.33 MiB/s [2024-10-21T10:09:24.211Z] 12186.08 IOPS, 47.60 MiB/s [2024-10-21T10:09:24.211Z] 12238.08 IOPS, 47.80 MiB/s [2024-10-21T10:09:24.211Z] 12287.50 IOPS, 48.00 MiB/s [2024-10-21T10:09:24.211Z] 12335.40 IOPS, 48.19 MiB/s
00:24:47.616 Latency(us)
00:24:47.617 [2024-10-21T10:09:24.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.617 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:47.617 Verification LBA range: start 0x0 length 0x4000
00:24:47.617 NVMe0n1 : 15.04 12299.95 48.05 557.10 0.00 9909.36 535.89 41287.68
00:24:47.617 [2024-10-21T10:09:24.212Z] ===================================================================================================================
00:24:47.617 [2024-10-21T10:09:24.212Z] Total : 12299.95 48.05 557.10 0.00 9909.36 535.89 41287.68
00:24:47.617 Received shutdown signal, test time was about 15.000000 seconds
00:24:47.617
00:24:47.617 Latency(us)
00:24:47.617 [2024-10-21T10:09:24.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.617 [2024-10-21T10:09:24.212Z] ===================================================================================================================
00:24:47.617 [2024-10-21T10:09:24.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1087735
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1087735 /var/tmp/bdevperf.sock
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1087735 ']'
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
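[Editor's note: host/failover.sh@65-67 above asserts exactly one 'Resetting controller successful' per forced failover, three in this pass. A standalone sketch of the same check, assuming the log was captured to try.txt as in this run:
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; } ]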
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:47.617 [2024-10-21 12:09:23.899555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:47.617 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:47.617 [2024-10-21 12:09:24.075953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:47.617 12:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:47.878 NVMe0n1
00:24:47.878 12:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:48.139
00:24:48.139 12:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:48.401
00:24:48.401 12:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:48.662 12:09:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:48.922 12:09:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:52.221 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:52.221 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:52.221 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1088652
00:24:52.221 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:52.221 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1088652
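[Editor's note: the xtrace above is the core of the test: two extra listeners are added, the bdev is attached to all three portals with -x failover, and the active path (4420) is detached to force the failovers seen earlier. A condensed sketch of the same RPC sequence, using the paths and addresses from this run:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # dropping the active path forces bdev_nvme to fail over to the next portal
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 ]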
00:24:53.162 "workload": "verify", 00:24:53.162 "status": "finished", 00:24:53.162 "verify_range": { 00:24:53.162 "start": 0, 00:24:53.162 "length": 16384 00:24:53.162 }, 00:24:53.162 "queue_depth": 128, 00:24:53.162 "io_size": 4096, 00:24:53.162 "runtime": 1.005632, 00:24:53.162 "iops": 12919.23884681474, 00:24:53.162 "mibps": 50.46577674537008, 00:24:53.162 "io_failed": 0, 00:24:53.162 "io_timeout": 0, 00:24:53.162 "avg_latency_us": 9872.516518883414, 00:24:53.162 "min_latency_us": 1365.3333333333333, 00:24:53.162 "max_latency_us": 10048.853333333333 00:24:53.162 } 00:24:53.162 ], 00:24:53.162 "core_count": 1 00:24:53.162 } 00:24:53.162 12:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:53.162 [2024-10-21 12:09:23.560062] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:24:53.162 [2024-10-21 12:09:23.560120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087735 ] 00:24:53.162 [2024-10-21 12:09:23.636379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.163 [2024-10-21 12:09:23.664414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.163 [2024-10-21 12:09:25.308442] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:53.163 [2024-10-21 12:09:25.308478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.163 [2024-10-21 12:09:25.308487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.163 [2024-10-21 12:09:25.308494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.163 [2024-10-21 12:09:25.308500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.163 [2024-10-21 12:09:25.308506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.163 [2024-10-21 12:09:25.308511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.163 [2024-10-21 12:09:25.308516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.163 [2024-10-21 12:09:25.308521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.163 [2024-10-21 12:09:25.308526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.163 [2024-10-21 12:09:25.308547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.163 [2024-10-21 12:09:25.308558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf427d0 (9): Bad file descriptor 00:24:53.163 [2024-10-21 12:09:25.360390] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:53.163 Running I/O for 1 seconds...
00:24:53.163 12863.00 IOPS, 50.25 MiB/s
00:24:53.163 Latency(us)
00:24:53.163 [2024-10-21T10:09:29.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:53.163 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:53.163 Verification LBA range: start 0x0 length 0x4000
00:24:53.163 NVMe0n1 : 1.01 12919.24 50.47 0.00 0.00 9872.52 1365.33 10048.85
00:24:53.163 [2024-10-21T10:09:29.758Z] ===================================================================================================================
00:24:53.163 [2024-10-21T10:09:29.758Z] Total : 12919.24 50.47 0.00 0.00 9872.52 1365.33 10048.85
00:24:53.163 12:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:53.163 12:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:53.423 12:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:53.423 12:09:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:53.423 12:09:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:53.684 12:09:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:53.944 12:09:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1087735
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1087735 ']'
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1087735
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1087735
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1087735'
00:24:57.349 killing process with pid 1087735
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1087735
00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1087735
00:24:57.349 12:09:33
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:57.349 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.615 rmmod nvme_tcp 00:24:57.615 rmmod nvme_fabrics 00:24:57.615 rmmod nvme_keyring 00:24:57.615 12:09:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1083557 ']' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1083557 ']' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083557' 00:24:57.615 killing process with pid 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1083557 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.615 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.159 00:25:00.159 real 0m39.821s 00:25:00.159 user 2m1.481s 00:25:00.159 sys 0m8.875s 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.159 ************************************ 00:25:00.159 END TEST nvmf_failover 00:25:00.159 ************************************ 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.159 ************************************ 00:25:00.159 START TEST nvmf_host_discovery 00:25:00.159 ************************************ 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:00.159 * Looking for test storage... 
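Condensed, the nvmf_failover run that just finished above is a handful of RPCs against the target and the bdevperf application socket. A minimal sketch of that sequence, with every command, flag, NQN, and path as reported in the trace; it assumes a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock:

# Sketch of the core flow of host/failover.sh; SPDK_DIR points at the checkout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Expose two additional listeners the host can fail over to.
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Attach all three paths to a single bdev_nvme controller in failover mode
# (the trace attaches them one by one; a loop is equivalent).
for port in 4420 4421 4422; do
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done

# Run I/O, then drop the active path; bdev_nvme resets onto the next one,
# which is the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice above.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
wait

# The controller should still be present on the surviving paths.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0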
00:25:00.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.159 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:00.159 12:09:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.160 12:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.304 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.305 12:09:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.305 
12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.305 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:25:08.305 00:25:08.305 --- 10.0.0.2 ping statistics --- 00:25:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.305 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:08.305 00:25:08.305 --- 10.0.0.1 ping statistics --- 00:25:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.305 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1093782 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1093782 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1093782 ']' 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.305 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.305 [2024-10-21 12:09:44.154470] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
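The namespace plumbing that nvmf_tcp_init performed just above reduces to plain ip/iptables commands: the target-side e810 port (cvl_0_0) moves into a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over real NICs on one machine. A sketch, run as root, with interface names and addresses exactly as the trace reports them (the full common.sh logic does more bookkeeping):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
# Let NVMe/TCP traffic in on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # host -> namespaced target
ip netns exec $NS ping -c 1 10.0.0.1              # namespace -> host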
00:25:08.306 [2024-10-21 12:09:44.154552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.306 [2024-10-21 12:09:44.245793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.306 [2024-10-21 12:09:44.296547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.306 [2024-10-21 12:09:44.296596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.306 [2024-10-21 12:09:44.296604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.306 [2024-10-21 12:09:44.296612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.306 [2024-10-21 12:09:44.296618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.306 [2024-10-21 12:09:44.297443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.567 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.567 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:08.567 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:08.567 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.567 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 [2024-10-21 12:09:45.012690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 [2024-10-21 12:09:45.024947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 null0 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 null1 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1094117 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1094117 /tmp/host.sock 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1094117 ']' 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.567 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.567 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.567 [2024-10-21 12:09:45.119659] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
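Stripped of the rpc_cmd/waitforlisten scaffolding, the discovery test being set up here has two halves: a target that advertises a discovery service on port 8009 with a null-backed subsystem behind it, and a second SPDK app (the one on /tmp/host.sock) that follows that service. A sketch of both halves, with NQNs, addresses, and sizes from the trace, steps condensed and reordered slightly relative to the script, and rpc.py standing in for the framework's rpc_cmd wrapper:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: TCP transport, discovery listener, and a null-backed subsystem.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
"$RPC" bdev_null_create null0 1000 512            # 1000 MB, 512-byte blocks
"$RPC" bdev_null_create null1 1000 512
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host side: point bdev_nvme's discovery poller at the service; each subsystem
# it reports is attached automatically as nvme<N>.
"$RPC" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
  -f ipv4 -q nqn.2021-12.io.spdk:test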
00:25:08.567 [2024-10-21 12:09:45.119722] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094117 ] 00:25:08.829 [2024-10-21 12:09:45.201829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.829 [2024-10-21 12:09:45.254539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.401 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.402 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.663 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.924 [2024-10-21 12:09:46.300262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:09.924 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:09.924 12:09:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:09.925 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:10.497 [2024-10-21 12:09:47.002520] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:10.497 [2024-10-21 12:09:47.002556] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:10.497 
[2024-10-21 12:09:47.002577] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.497 [2024-10-21 12:09:47.090818] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:10.758 [2024-10-21 12:09:47.315817] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:10.758 [2024-10-21 12:09:47.315838] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.018 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.279 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.279 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.279 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:11.279 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:11.280 12:09:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 [2024-10-21 12:09:47.812424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.280 [2024-10-21 12:09:47.812987] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:11.280 [2024-10-21 12:09:47.813016] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.280 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.541 [2024-10-21 12:09:47.900280] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:11.541 12:09:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.541 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.542 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.542 12:09:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:11.542 [2024-10-21 12:09:48.048390] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.542 [2024-10-21 12:09:48.048412] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.542 [2024-10-21 12:09:48.048418] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.484 12:09:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:12.484 12:09:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.484 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.746 [2024-10-21 12:09:49.084255] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:12.746 [2024-10-21 12:09:49.084279] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:12.746 [2024-10-21 12:09:49.087634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.746 [2024-10-21 12:09:49.087653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.746 [2024-10-21 12:09:49.087662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.746 [2024-10-21 12:09:49.087670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.746 [2024-10-21 12:09:49.087678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.746 [2024-10-21 12:09:49.087686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.746 [2024-10-21 12:09:49.087694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.746 [2024-10-21 12:09:49.087702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.746 [2024-10-21 12:09:49.087709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:12.746 [2024-10-21 12:09:49.097645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.746 [2024-10-21 12:09:49.107685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.746 [2024-10-21 12:09:49.107899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.746 [2024-10-21 12:09:49.107915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.746 [2024-10-21 12:09:49.107924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.746 [2024-10-21 12:09:49.107936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.746 [2024-10-21 12:09:49.107947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.746 [2024-10-21 12:09:49.107955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.746 [2024-10-21 12:09:49.107963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
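[editor's note] The xtrace above repeatedly exercises a small set of query helpers from host/discovery.sh (the @55/@59/@63 markers). Their bodies are not printed contiguously in this log, but the traced pipelines pin them down; the following is a minimal reconstruction inferred from those pipelines, assuming rpc_cmd wraps scripts/rpc.py and that /tmp/host.sock is the host app's RPC socket as shown in the trace:

    # Reconstructed from the traced pipelines (host/discovery.sh@55/@59/@63).
    # Assumption: rpc_cmd forwards its arguments to scripts/rpc.py.
    get_subsystem_names() {
        # Controller names known to the host app, space-joined for easy [[ == ]] checks.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        # Bdev names (one per attached namespace), space-joined.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # TCP service IDs (ports) of every path to the named controller.
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }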
00:25:12.746 [2024-10-21 12:09:49.107976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.746 [2024-10-21 12:09:49.117743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.746 [2024-10-21 12:09:49.118038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.746 [2024-10-21 12:09:49.118051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.746 [2024-10-21 12:09:49.118059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.746 [2024-10-21 12:09:49.118070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.746 [2024-10-21 12:09:49.118081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.746 [2024-10-21 12:09:49.118088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.746 [2024-10-21 12:09:49.118095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.746 [2024-10-21 12:09:49.118106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:12.746 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:12.746 [2024-10-21 12:09:49.127797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.746 [2024-10-21 12:09:49.128096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.746 [2024-10-21 12:09:49.128108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.746 [2024-10-21 12:09:49.128115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.128126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.128142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.128149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.128156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.128166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.747 [2024-10-21 12:09:49.138255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.138604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.138618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.138626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.138637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.138654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.138661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.138669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.138680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
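[editor's note] The repeated "connect() failed, errno = 111" entries are expected at this point: errno 111 is ECONNREFUSED, and the host keeps trying to reset and reconnect the 10.0.0.2:4420 path that nvmf_subsystem_remove_listener just tore down, until the next discovery log page prunes it. The test rides this window out with the polling helper traced as common/autotest_common.sh@914-920; a sketch inferred from that trace:

    # Sketch of waitforcondition as traced above (autotest_common.sh@914-920).
    # Evaluates an arbitrary shell condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }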
00:25:12.747 [2024-10-21 12:09:49.148314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.148655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.148667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.148675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.148685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.148703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.148710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.148717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.148727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.747 [2024-10-21 12:09:49.158397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.158741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.158753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.158765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.158776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.158793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.158799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.158807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.158818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.747 [2024-10-21 12:09:49.168451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.168764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.168775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.168783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.168794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.168804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.168811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.168818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.168829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.747 [2024-10-21 12:09:49.178502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.178806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.178818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.178825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.178837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.178847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.178853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.178860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.178871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
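[editor's note] Notification accounting works the same way throughout this run: get_notification_count (host/discovery.sh@74-75) asks the host app for every event newer than the last seen notify_id and then advances that cursor, so each is_notification_count_eq check only counts events raised since the previous check. Inferred from the traced assignments above (notify_id stepping 0 -> 1 -> 2 -> 4 as counts of 1, 1 and 2 arrive):

    # Inferred from host/discovery.sh@74-75 as traced: count notifications
    # newer than the current cursor, then advance notify_id by that amount.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }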
00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.747 [2024-10-21 12:09:49.188558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.188894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.188906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.188913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.188924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.188941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.188948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.188955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.188965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.747 [2024-10-21 12:09:49.198611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.198914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.198926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.198933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.198944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.198954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.198960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.198967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.198978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.747 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.747 [2024-10-21 12:09:49.208665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:12.747 [2024-10-21 12:09:49.208951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.747 [2024-10-21 12:09:49.208962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17539d0 with addr=10.0.0.2, port=4420 00:25:12.747 [2024-10-21 12:09:49.208969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17539d0 is same with the state(6) to be set 00:25:12.747 [2024-10-21 12:09:49.208980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17539d0 (9): Bad file descriptor 00:25:12.747 [2024-10-21 12:09:49.208994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:12.747 [2024-10-21 12:09:49.209000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:12.747 [2024-10-21 12:09:49.209007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:12.747 [2024-10-21 12:09:49.209017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
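[editor's note] Putting the last few steps together: the removal is driven entirely from the target side and only observed from the host side. A condensed replay of what host/discovery.sh@127-132 is doing in the trace above, with the ports from this run (4420 removed, 4421 kept; NVMF_SECOND_PORT is assumed to hold 4421 as the comparisons suggest):

    # Condensed from host/discovery.sh@127-132 as traced above.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # The host keeps the controller and both namespaces while reconnects fail...
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    # ...until the discovery poller reports 4420 "not found" and drops the path,
    # leaving only the second listener, with no new namespace notifications.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
    is_notification_count_eq 0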
00:25:12.748 [2024-10-21 12:09:49.215058] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:12.748 [2024-10-21 12:09:49.215076] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:12.748 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:12.748 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.689 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.690 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.690 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.690 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.690 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:13.951 12:09:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.951 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.952 12:09:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 [2024-10-21 12:09:51.502896] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:15.338 [2024-10-21 12:09:51.502912] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:15.338 [2024-10-21 12:09:51.502921] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.338 [2024-10-21 12:09:51.591173] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:15.338 [2024-10-21 12:09:51.697965] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:15.338 [2024-10-21 12:09:51.697989] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 request: 00:25:15.338 { 00:25:15.338 "name": "nvme", 00:25:15.338 "trtype": "tcp", 00:25:15.338 "traddr": "10.0.0.2", 00:25:15.338 "adrfam": "ipv4", 00:25:15.338 "trsvcid": "8009", 00:25:15.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:15.338 "wait_for_attach": true, 00:25:15.338 "method": "bdev_nvme_start_discovery", 00:25:15.338 "req_id": 1 00:25:15.338 } 00:25:15.338 Got JSON-RPC error response 00:25:15.338 response: 00:25:15.338 { 00:25:15.338 "code": -17, 00:25:15.338 "message": "File exists" 00:25:15.338 } 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 request: 00:25:15.338 { 00:25:15.338 "name": "nvme_second", 00:25:15.338 "trtype": "tcp", 00:25:15.338 "traddr": "10.0.0.2", 00:25:15.338 "adrfam": "ipv4", 00:25:15.338 "trsvcid": "8009", 00:25:15.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:15.338 "wait_for_attach": true, 00:25:15.338 "method": "bdev_nvme_start_discovery", 00:25:15.338 "req_id": 1 00:25:15.338 } 00:25:15.338 Got JSON-RPC error response 00:25:15.338 response: 00:25:15.338 { 00:25:15.338 "code": -17, 00:25:15.338 "message": "File exists" 00:25:15.338 } 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:15.338 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:15.339 12:09:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.339 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.600 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.541 [2024-10-21 12:09:52.961872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.541 [2024-10-21 12:09:52.961897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1753170 with addr=10.0.0.2, port=8010 00:25:16.541 [2024-10-21 12:09:52.961908] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:16.541 [2024-10-21 12:09:52.961913] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:16.541 [2024-10-21 12:09:52.961918] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:17.482 [2024-10-21 12:09:53.964209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.482 [2024-10-21 12:09:53.964229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1753170 with addr=10.0.0.2, port=8010 00:25:17.482 [2024-10-21 12:09:53.964237] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.482 [2024-10-21 12:09:53.964242] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:17.482 [2024-10-21 12:09:53.964247] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:18.423 [2024-10-21 12:09:54.966256] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:18.423 request: 00:25:18.423 { 00:25:18.423 "name": "nvme_second", 00:25:18.423 "trtype": "tcp", 00:25:18.423 "traddr": "10.0.0.2", 00:25:18.423 "adrfam": "ipv4", 00:25:18.423 "trsvcid": "8010", 00:25:18.423 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:18.423 "wait_for_attach": false, 00:25:18.423 "attach_timeout_ms": 3000, 00:25:18.423 "method": "bdev_nvme_start_discovery", 00:25:18.423 "req_id": 1 00:25:18.423 } 00:25:18.423 Got JSON-RPC error response 00:25:18.423 response: 00:25:18.423 { 00:25:18.423 "code": -110, 00:25:18.423 "message": "Connection timed out" 00:25:18.423 } 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:18.423 12:09:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1094117 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.684 rmmod nvme_tcp 00:25:18.684 rmmod nvme_fabrics 00:25:18.684 rmmod nvme_keyring 00:25:18.684 12:09:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1093782 ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1093782 ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093782' 00:25:18.684 killing process with pid 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1093782 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.684 12:09:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.228 00:25:21.228 real 0m20.990s 00:25:21.228 user 0m24.967s 00:25:21.228 sys 0m7.212s 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.228 
************************************ 00:25:21.228 END TEST nvmf_host_discovery 00:25:21.228 ************************************ 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.228 ************************************ 00:25:21.228 START TEST nvmf_host_multipath_status 00:25:21.228 ************************************ 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:21.228 * Looking for test storage... 00:25:21.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.228 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.229 --rc genhtml_branch_coverage=1 00:25:21.229 --rc genhtml_function_coverage=1 00:25:21.229 --rc genhtml_legend=1 00:25:21.229 --rc geninfo_all_blocks=1 00:25:21.229 --rc geninfo_unexecuted_blocks=1 00:25:21.229 00:25:21.229 ' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.229 --rc genhtml_branch_coverage=1 00:25:21.229 --rc genhtml_function_coverage=1 00:25:21.229 --rc genhtml_legend=1 00:25:21.229 --rc geninfo_all_blocks=1 00:25:21.229 --rc geninfo_unexecuted_blocks=1 00:25:21.229 00:25:21.229 ' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.229 --rc genhtml_branch_coverage=1 00:25:21.229 --rc genhtml_function_coverage=1 00:25:21.229 --rc genhtml_legend=1 00:25:21.229 --rc geninfo_all_blocks=1 00:25:21.229 --rc geninfo_unexecuted_blocks=1 00:25:21.229 00:25:21.229 ' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.229 --rc genhtml_branch_coverage=1 00:25:21.229 --rc genhtml_function_coverage=1 00:25:21.229 --rc genhtml_legend=1 00:25:21.229 --rc geninfo_all_blocks=1 00:25:21.229 --rc geninfo_unexecuted_blocks=1 00:25:21.229 00:25:21.229 ' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
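The xtrace above steps through the version guard in scripts/common.sh: lt 1.15 2 calls cmp_versions, which splits each version string on '.', '-' and ':' into arrays (here ver1_l=2 fields vs ver2_l=1) and compares the numeric fields in order, treating missing fields as 0. A minimal standalone sketch of that logic, reconstructed from the trace — the function names follow the trace, but the exact control flow here is an assumption, not SPDK's verbatim implementation:

# Sketch of the lt/cmp_versions pattern seen in the trace (assumed reconstruction).
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"        # e.g. 1.15 -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"        # e.g. 2    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken in the trace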
00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:21.229 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.230 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.372 12:10:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:29.372 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
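The device scan that follows resolves each Intel E810 PCI function (device ID 0x159b) to its kernel net device by globbing the function's net/ directory in sysfs, then stripping the path prefix. A condensed sketch of that mapping step, using the two PCI addresses this run reports; the hard-coded pci_devs list and the existence check are illustrative simplifications, not the full nvmf/common.sh logic:

# Map E810 PCI functions to net device names, as the trace below does.
pci_devs=(0000:4b:00.0 0000:4b:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per bound netdev
    [[ -e ${pci_net_devs[0]} ]] || continue            # glob didn't match: no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# In this run the result is cvl_0_0 and cvl_0_1, as echoed below.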
00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.372 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:29.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:29.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:29.373 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.373 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.373 12:10:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:25:29.373 00:25:29.373 --- 10.0.0.2 ping statistics --- 00:25:29.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.373 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:25:29.373 00:25:29.373 --- 10.0.0.1 ping statistics --- 00:25:29.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.373 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1100314 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1100314 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1100314 ']' 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.373 12:10:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.373 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.373 [2024-10-21 12:10:05.256234] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:25:29.373 [2024-10-21 12:10:05.256302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.373 [2024-10-21 12:10:05.344154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:29.373 [2024-10-21 12:10:05.396138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.373 [2024-10-21 12:10:05.396196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.373 [2024-10-21 12:10:05.396205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.373 [2024-10-21 12:10:05.396212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.373 [2024-10-21 12:10:05.396218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.373 [2024-10-21 12:10:05.397875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.373 [2024-10-21 12:10:05.397879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1100314 00:25:29.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:29.894 [2024-10-21 12:10:06.280751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.894 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:30.155 Malloc0 00:25:30.155 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:30.155 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.416 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.678 [2024-10-21 12:10:07.035492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:30.678 [2024-10-21 12:10:07.203975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1100674 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1100674 /var/tmp/bdevperf.sock 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1100674 ']' 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
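Steps 36 through 42 above assemble the whole target side through rpc.py. Condensed into one place, the same setup is the following command sequence; the commands and arguments are exactly those from the trace, only the rpc.py path is shortened into a variable:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8192B in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -r -m 2                    # any host, ANA reporting, max 2 namespaces
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same subsystem give bdevperf two paths to the one namespace, which is what the -x multipath attach_controller calls below exercise.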
00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.678 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:31.621 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.621 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:31.621 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:31.880 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.141 Nvme0n1 00:25:32.141 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.403 Nvme0n1 00:25:32.403 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:32.403 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:34.945 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:34.945 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:34.945 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:34.945 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:35.885 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:35.885 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.885 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.885 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.407 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.407 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.407 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.407 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.667 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.667 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.667 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.667 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:36.928 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
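Each check_status round above is six port_status probes, and every probe is the same two-step pattern visible in the xtrace: dump the host's I/O paths over the bdevperf RPC socket, then pick one path attribute out with jq. A minimal sketch of the helper, inferred from the traced commands rather than quoted from host/multipath_status.sh:

  # inferred sketch of: port_status <trsvcid> <attribute> <expected>
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ $actual == "$expected" ]]
  }
  port_status 4420 current true    # e.g. assert that the 4420 path is the active one

For reference, bdev_nvme_get_io_paths returns JSON shaped roughly like the following (illustrative, trimmed to the fields the jq filters above actually touch):

  { "poll_groups": [ { "io_paths": [
      { "current": true, "connected": true, "accessible": true,
        "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" } } ] } ] }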
00:25:37.188 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.448 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:38.390 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:38.390 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.390 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.390 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.651 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.651 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.651 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.651 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.911 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.172 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.172 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.172 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:39.172 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:39.432 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.693 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:39.953 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:40.895 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:40.895 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.895 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.895 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.156 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.417 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.417 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.417 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.417 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.678 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.679 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.679 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.679 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:41.940 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:42.201 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:42.460 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:43.404 12:10:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.404 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.665 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.665 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.665 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.665 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.926 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:44.187 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:44.187 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.187 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.187 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:44.187 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.187 12:10:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.448 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.448 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:44.448 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:44.708 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:44.708 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:46.092 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:46.092 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.093 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.353 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.353 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.353 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.353 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.614 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.875 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.875 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:46.875 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:47.136 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:47.136 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.519 12:10:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.519 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.519 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.519 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.519 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.519 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.779 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.779 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.779 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.779 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.039 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.039 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:49.039 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.039 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.300 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:49.568 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:49.568 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:49.827 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.827 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:50.849 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:50.849 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.849 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.849 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.195 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.195 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:51.195 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.195 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.461 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.721 12:10:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.721 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.982 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.982 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:51.982 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:52.243 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.243 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:53.625 12:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:53.625 12:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:53.625 12:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.625 12:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.625 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.886 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.886 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.886 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.886 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.148 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.148 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.148 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.148 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:54.409 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.669 12:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:54.930 12:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
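The set_ANA_state steps above always expand to one listener RPC per port, so the whole ANA matrix the test walks (optimized, non_optimized, inaccessible in various pairings) reduces to a two-line helper. This is an inferred sketch reusing the $rpc shorthand from the earlier sketch, with both RPC invocations exactly as they appear in the trace:

  # inferred sketch of: set_ANA_state <state-for-4420> <state-for-4421>
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized non_optimized   # the transition checked just above

Each transition is followed by a one-second sleep before check_status, which gives the host time to observe the ANA change and re-evaluate which path is current.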
00:25:55.871 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:55.871 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.871 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.872 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.133 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.394 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.394 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.394 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.394 12:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.655 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.655 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.655 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.655 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:56.915 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.175 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:57.436 12:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:58.376 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:58.376 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.376 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.376 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.639 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.639 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.639 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.639 12:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.639 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.639 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.639 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.639 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.899 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:58.899 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.899 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.899 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.160 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1100674 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1100674 ']' 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1100674 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100674 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100674' 00:25:59.421 killing process with pid 1100674 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1100674 00:25:59.421 12:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1100674 00:25:59.421 { 00:25:59.421 "results": [ 00:25:59.421 { 00:25:59.421 "job": "Nvme0n1", 
00:25:59.421 "core_mask": "0x4", 00:25:59.421 "workload": "verify", 00:25:59.421 "status": "terminated", 00:25:59.421 "verify_range": { 00:25:59.421 "start": 0, 00:25:59.421 "length": 16384 00:25:59.421 }, 00:25:59.421 "queue_depth": 128, 00:25:59.421 "io_size": 4096, 00:25:59.421 "runtime": 26.839883, 00:25:59.421 "iops": 12095.097433919515, 00:25:59.422 "mibps": 47.246474351248104, 00:25:59.422 "io_failed": 0, 00:25:59.422 "io_timeout": 0, 00:25:59.422 "avg_latency_us": 10565.424460551621, 00:25:59.422 "min_latency_us": 607.5733333333334, 00:25:59.422 "max_latency_us": 3019898.88 00:25:59.422 } 00:25:59.422 ], 00:25:59.422 "core_count": 1 00:25:59.422 } 00:25:59.685 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1100674 00:25:59.685 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.685 [2024-10-21 12:10:07.257227] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:25:59.685 [2024-10-21 12:10:07.257298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100674 ] 00:25:59.685 [2024-10-21 12:10:07.337545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.685 [2024-10-21 12:10:07.388870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.685 Running I/O for 90 seconds... 00:25:59.685 10975.00 IOPS, 42.87 MiB/s [2024-10-21T10:10:36.280Z] 11510.50 IOPS, 44.96 MiB/s [2024-10-21T10:10:36.280Z] 12001.67 IOPS, 46.88 MiB/s [2024-10-21T10:10:36.280Z] 12250.00 IOPS, 47.85 MiB/s [2024-10-21T10:10:36.280Z] 12382.00 IOPS, 48.37 MiB/s [2024-10-21T10:10:36.280Z] 12457.67 IOPS, 48.66 MiB/s [2024-10-21T10:10:36.280Z] 12519.43 IOPS, 48.90 MiB/s [2024-10-21T10:10:36.280Z] 12567.88 IOPS, 49.09 MiB/s [2024-10-21T10:10:36.280Z] 12597.67 IOPS, 49.21 MiB/s [2024-10-21T10:10:36.280Z] 12621.40 IOPS, 49.30 MiB/s [2024-10-21T10:10:36.280Z] 12655.73 IOPS, 49.44 MiB/s [2024-10-21T10:10:36.280Z] [2024-10-21 12:10:21.045988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.685 [2024-10-21 12:10:21.046134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.685 [2024-10-21 12:10:21.046140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
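The flood of nvme_qpair notices here is expected rather than an error: each command/completion pair is a read bounced back with a path-related status while one listener sits in the inaccessible ANA state. Decoding the printed fields per the NVMe specification:

  (03/02) -> SCT 0x3 (Path Related Status), SC 0x02 (Asymmetric Access Inaccessible)
  sqhd    -> submission queue head pointer echoed back in the completion
  dnr:0   -> Do Not Retry is clear, so the host may retry the I/O on another path

That is exactly what the multipath policy does: the I/O fails over to the listener that is still accessible, and bdevperf keeps running through the state change.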
00:25:59.686 [2024-10-21 12:10:21.046253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 
nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:25:59.686 [2024-10-21 12:10:21.046725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.686 [2024-10-21 12:10:21.046788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.686 [2024-10-21 12:10:21.046793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.046896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.046902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.687 [2024-10-21 12:10:21.047522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:59.687 [2024-10-21 12:10:21.047831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.047983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.047997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.687 [2024-10-21 12:10:21.048119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:59.687 [2024-10-21 12:10:21.048133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.688 
[2024-10-21 12:10:21.048482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:21.048611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.688 [2024-10-21 12:10:21.048616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:59.688 12597.08 IOPS, 49.21 MiB/s [2024-10-21T10:10:36.283Z] 11628.08 IOPS, 45.42 MiB/s [2024-10-21T10:10:36.283Z] 10797.50 IOPS, 42.18 MiB/s [2024-10-21T10:10:36.283Z] 10132.67 IOPS, 39.58 MiB/s [2024-10-21T10:10:36.283Z] 10302.50 IOPS, 40.24 MiB/s [2024-10-21T10:10:36.283Z] 10459.53 IOPS, 40.86 MiB/s [2024-10-21T10:10:36.283Z] 10805.67 IOPS, 42.21 MiB/s [2024-10-21T10:10:36.283Z] 11147.95 IOPS, 43.55 MiB/s [2024-10-21T10:10:36.283Z] 11354.95 IOPS, 44.36 MiB/s [2024-10-21T10:10:36.283Z] 11426.86 IOPS, 44.64 MiB/s [2024-10-21T10:10:36.283Z] 11488.32 IOPS, 44.88 MiB/s [2024-10-21T10:10:36.283Z] 11699.04 IOPS, 45.70 MiB/s [2024-10-21T10:10:36.283Z] 11915.17 IOPS, 46.54 MiB/s [2024-10-21T10:10:36.283Z] [2024-10-21 12:10:33.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.688 [2024-10-21 12:10:33.772475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:59.688 [2024-10-21 12:10:33.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 
12:10:33.772523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.772629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.772634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.689 [2024-10-21 12:10:33.773565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.689 [2024-10-21 12:10:33.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:59.689 [2024-10-21 12:10:33.773695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.689 [2024-10-21 12:10:33.773700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:59.689 12052.80 IOPS, 47.08 MiB/s [2024-10-21T10:10:36.284Z] 12082.77 IOPS, 47.20 MiB/s [2024-10-21T10:10:36.284Z] Received shutdown signal, test time was about 26.840492 seconds 00:25:59.689 00:25:59.689 Latency(us) 00:25:59.689 [2024-10-21T10:10:36.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.689 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:59.689 Verification LBA range: start 0x0 length 0x4000 00:25:59.689 Nvme0n1 : 26.84 12095.10 47.25 0.00 0.00 10565.42 607.57 3019898.88 00:25:59.689 [2024-10-21T10:10:36.284Z] =================================================================================================================== 00:25:59.689 [2024-10-21T10:10:36.284Z] Total : 12095.10 47.25 0.00 0.00 10565.42 607.57 3019898.88 00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 
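For reference, the two trace records above condense to this bash sketch of the cleanup kickoff, for replaying it by hand (only the rpc.py call and the trap reset appear in the trace; the checkout path is this job's workspace):

  #!/usr/bin/env bash
  # Sketch of the teardown step traced at host/multipath_status.sh:143-145.
  # Assumption: an SPDK nvmf_tgt is still running and serving this subsystem.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Delete the test subsystem so the initiator-side multipath paths are torn down cleanly.
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Clear the error traps the test installed before normal cleanup runs.
  trap - SIGINT SIGTERM EXIT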
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:59.689 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:59.689 rmmod nvme_tcp
00:25:59.950 rmmod nvme_fabrics
00:25:59.950 rmmod nvme_keyring
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1100314 ']'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1100314 ']'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100314'
00:25:59.950 killing process with pid 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1100314
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:59.950 12:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:02.499
00:26:02.499 real 0m41.180s
00:26:02.499 user 1m46.427s
00:26:02.499 sys 0m11.590s
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:02.499 ************************************
00:26:02.499 END TEST nvmf_host_multipath_status
00:26:02.499 ************************************
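The nvmftestfini/killprocess sequence traced above reduces to the following sketch (a simplified rendering, not the verbatim helpers: nvmf/common.sh and autotest_common.sh carry more checks; the assumption here is that nvmftestpid holds the nvmf_tgt PID, 1100314 in this run):

  # Sketch of the cleanup pattern seen in the trace above.
  nvmfcleanup() {
      sync
      set +e                      # module removal may fail while references drain
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
          sleep 1
      done
      set -e
  }

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0  # target already gone, nothing to do
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                 # reap it so ports free up before the next test
  }

  nvmfcleanup
  killprocess "$nvmftestpid"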
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.499 ************************************
00:26:02.499 START TEST nvmf_discovery_remove_ifc
00:26:02.499 ************************************
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:02.499 * Looking for test storage...
00:26:02.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.499 --rc genhtml_branch_coverage=1
00:26:02.499 --rc genhtml_function_coverage=1
00:26:02.499 --rc genhtml_legend=1
00:26:02.499 --rc geninfo_all_blocks=1
00:26:02.499 --rc geninfo_unexecuted_blocks=1
00:26:02.499
00:26:02.499 '
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:26:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.499 --rc genhtml_branch_coverage=1
00:26:02.499 --rc genhtml_function_coverage=1
00:26:02.499 --rc genhtml_legend=1
00:26:02.499 --rc geninfo_all_blocks=1
00:26:02.499 --rc geninfo_unexecuted_blocks=1
00:26:02.499
00:26:02.499 '
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:26:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.499 --rc genhtml_branch_coverage=1
00:26:02.499 --rc genhtml_function_coverage=1
00:26:02.499 --rc genhtml_legend=1
00:26:02.499 --rc geninfo_all_blocks=1
00:26:02.499 --rc geninfo_unexecuted_blocks=1
00:26:02.499
00:26:02.499 '
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:26:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.499 --rc genhtml_branch_coverage=1
00:26:02.499 --rc genhtml_function_coverage=1
00:26:02.499 --rc genhtml_legend=1
00:26:02.499 --rc geninfo_all_blocks=1
00:26:02.499 --rc geninfo_unexecuted_blocks=1
00:26:02.499
00:26:02.499 '
00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
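Sourcing test/nvmf/common.sh pins the standard test ports (4420/4421/4422, with discovery on 8009 later) and derives a host NQN/ID pair from nvme gen-hostnqn. A sketch of how such a pair is typically consumed by a kernel initiator; this log drives the host through SPDK's own initiator instead, so this exact connect command does not appear above:

# Sketch: derive host identity and connect with nvme-cli (illustrative use;
# target address and subsystem NQN are the ones this test sets up later).
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip the prefix, keep the bare UUID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 \
  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"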
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.499 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
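Note the genuine shell warning surfaced above: common.sh line 33 runs '[' '' -eq 1 ']' with an empty operand, so test(1) complains "integer expression expected"; the condition fails (exit status 2) and execution simply continues. A defensive sketch of the same check (SPDK_TEST_FOO is an illustrative variable name, not one from the harness):

# Sketch: default an unset/empty variable to 0 before a numeric test,
# avoiding the "integer expression expected" warning seen above.
if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
  echo "feature enabled"
fi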
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.500 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:10.641 12:10:46 
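gather_supported_nvmf_pci_devs starts by declaring the per-family arrays above before walking a pre-built pci_bus_cache of vendor/device IDs. An approximation of that enumeration using lspci (the harness reads its own cache instead of calling lspci, so treat this as illustrative):

# Sketch: list Intel E810 NICs (vendor 0x8086, device 0x159b) by PCI address.
lspci -Dnd 8086:159b | awk '{print $1}'
# e.g. 0000:4b:00.0
#      0000:4b:00.1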
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:10.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.641 12:10:46 
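The array appends above amount to a vendor:device to NIC-family table, which the loop then uses to log "Found 0000:4b:00.0 (0x8086 - 0x159b)" and keep only e810 parts for this run. A compact sketch of that classification (device IDs copied from the trace; the function form is illustrative):

# Sketch of the vendor:device -> NIC family mapping built above.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;
    0x8086:0x37d2)               echo x722 ;;
    0x15b3:*)                    echo mlx  ;;
    *)                           echo unknown ;;
  esac
}
classify_nic 0x8086:0x159b   # -> e810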
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:10.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.641 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:10.642 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:10.642 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
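Resolving each PCI function to its kernel netdev name goes through the sysfs glob visible above (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)). The equivalent standalone lookup:

# Sketch: map a PCI address to its netdev via sysfs, mirroring the glob.
pci=0000:4b:00.0
ls "/sys/bus/pci/devices/$pci/net/"
# -> cvl_0_0 (per the "Found net devices under 0000:4b:00.0" line above)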
net_devs+=("${pci_net_devs[@]}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.642 
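nvmf_tcp_init, traced above, builds the whole two-endpoint topology on one host: the target NIC moves into a fresh namespace with 10.0.0.2, the initiator NIC keeps 10.0.0.1 in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. The same sequence condensed (every command below appears verbatim in the trace):

ip netns add cvl_0_0_ns_spdk                                   # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move target NIC in
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP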
12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:26:10.642 00:26:10.642 --- 10.0.0.2 ping statistics --- 00:26:10.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.642 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:26:10.642 00:26:10.642 --- 10.0.0.1 ping statistics --- 00:26:10.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.642 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1110773 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1110773 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1110773 ']' 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
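Before the target starts, connectivity is proven in both directions with single-packet pings, NVMF_APP is prefixed with the netns exec wrapper so the target binds inside the namespace, and modprobe nvme-tcp readies the kernel side. A sketch of the same readiness check with explicit failure handling (the -W timeout and the error messages are additions, not from the trace):

# Sketch: fail fast if the back-to-back link is not passing traffic.
ping -c 1 -W 2 10.0.0.2 >/dev/null || { echo "target IP unreachable"; exit 1; }
ip netns exec cvl_0_0_ns_spdk ping -c 1 -W 2 10.0.0.1 >/dev/null \
  || { echo "initiator IP unreachable from namespace"; exit 1; }
modprobe nvme-tcp   # kernel NVMe/TCP support, as loaded above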
00:26:10.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.642 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.642 [2024-10-21 12:10:46.491902] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:26:10.642 [2024-10-21 12:10:46.491968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.642 [2024-10-21 12:10:46.582968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.642 [2024-10-21 12:10:46.633621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.642 [2024-10-21 12:10:46.633674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.642 [2024-10-21 12:10:46.633683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.642 [2024-10-21 12:10:46.633691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.642 [2024-10-21 12:10:46.633697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.642 [2024-10-21 12:10:46.634479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.903 [2024-10-21 12:10:47.360730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.903 [2024-10-21 12:10:47.369002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:10.903 null0 00:26:10.903 [2024-10-21 12:10:47.400953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1110910 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
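The target bring-up between nvmfappstart and the two nvmf_tcp_listen notices is driven by rpc_cmd calls that xtrace collapses into the single rpc_cmd line above. A plausible expansion, assuming SPDK's stock rpc.py verbs and inferring arguments from the listen notices and the null0 bdev named below; none of these exact invocations are shown verbatim in this log:

# Assumed expansion of the collapsed rpc_cmd batch (standard rpc.py verbs).
rpc.py nvmf_create_transport -t tcp -o             # matches NVMF_TRANSPORT_OPTS
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009                  # discovery listener
rpc.py bdev_null_create null0 1000 512             # null bdev backing the ns
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420                  # data listener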
--wait-for-rpc -L bdev_nvme 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1110910 /tmp/host.sock 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1110910 ']' 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:10.903 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.903 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.903 [2024-10-21 12:10:47.476760] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:26:10.904 [2024-10-21 12:10:47.476827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110910 ] 00:26:11.164 [2024-10-21 12:10:47.557254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.164 [2024-10-21 12:10:47.610797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.736 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.997 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.997 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:11.997 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
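On the host side, bdev_nvme tracing is enabled (-L bdev_nvme), error detection is switched on via bdev_nvme_set_options -e 1, and discovery is started with deliberately short failure timeouts so the interface removal below trips the reconnect logic within seconds. The start_discovery call from the trace, reflowed one argument group per line:

# Reflowed from the trace above (identical arguments):
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
  -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
  -q nqn.2021-12.io.spdk:test \
  --ctrlr-loss-timeout-sec 2 \
  --reconnect-delay-sec 1 \
  --fast-io-fail-timeout-sec 1 \
  --wait-for-attach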
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.997 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.941 [2024-10-21 12:10:49.450298] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:12.941 [2024-10-21 12:10:49.450341] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:12.941 [2024-10-21 12:10:49.450357] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:13.201 [2024-10-21 12:10:49.581778] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:13.201 [2024-10-21 12:10:49.682438] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:13.201 [2024-10-21 12:10:49.682496] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:13.201 [2024-10-21 12:10:49.682520] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:13.201 [2024-10-21 12:10:49.682536] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:13.201 [2024-10-21 12:10:49.682557] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.201 [2024-10-21 12:10:49.688887] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15dcd90 was disconnected and freed. delete nvme_qpair. 
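Once the discovery controller attaches and nvme0n1 is created, the test enters its wait_for_bdev/get_bdev_list polling helpers. Their shape, reconstructed from the rpc_cmd/jq/sort/xargs pipeline and the sleep-1 loop traced below (a faithful reconstruction, not a verbatim copy):

# Reconstructed from the trace: list bdev names, poll until they match.
get_bdev_list() {
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
  while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}
wait_for_bdev nvme0n1   # blocks until exactly nvme0n1 is present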
00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:13.201 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.463 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.404 12:10:50 
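This is the fault-injection step of discovery_remove_ifc: with nvme0n1 attached, the target's address is deleted and its link downed inside the namespace, and the test polls until the bdev list drains to empty. Condensed from the trace:

# Fault injection, as run above: remove the target address and drop the link
# while the NVMe/TCP connection is still live.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''   # poll until no bdevs remain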
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.789 12:10:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.789 12:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.790 12:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.730 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.731 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.731 12:10:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.671 12:10:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.671 12:10:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.616 [2024-10-21 12:10:55.122956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:18.616 [2024-10-21 12:10:55.122995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.616 [2024-10-21 12:10:55.123004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.616 [2024-10-21 12:10:55.123012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.616 [2024-10-21 12:10:55.123017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.616 [2024-10-21 12:10:55.123024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.616 [2024-10-21 12:10:55.123030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.616 [2024-10-21 12:10:55.123035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.616 [2024-10-21 12:10:55.123040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.616 [2024-10-21 12:10:55.123046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.616 [2024-10-21 12:10:55.123055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.616 [2024-10-21 12:10:55.123061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9800 is same with the state(6) to be set 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.616 [2024-10-21 12:10:55.132977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b9800 (9): 
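The spdk_sock_recv() errno 110 and the run of ABORTED - SQ DELETION completions above are the expected fault signature: once the socket times out, the outstanding admin commands (the async event requests and the keep-alive) on qpair 0x15b9800 are aborted and the controller goes into reset. The short timeouts configured at start_discovery time then bound how long reconnection is attempted; a comment-form sketch of that interplay:

# Sketch of the timeout interplay (values from the start_discovery call):
#   reconnect-delay-sec=1      -> retry connecting roughly once per second
#   fast-io-fail-timeout-sec=1 -> pending I/O fails after ~1 s disconnected
#   ctrlr-loss-timeout-sec=2   -> controller deleted after ~2 s without a
#                                 successful reconnect, removing nvme0n1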
Bad file descriptor 00:26:18.616 [2024-10-21 12:10:55.143013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.616 12:10:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.002 [2024-10-21 12:10:56.167378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:20.002 [2024-10-21 12:10:56.167470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b9800 with addr=10.0.0.2, port=4420 00:26:20.002 [2024-10-21 12:10:56.167501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9800 is same with the state(6) to be set 00:26:20.002 [2024-10-21 12:10:56.167557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b9800 (9): Bad file descriptor 00:26:20.003 [2024-10-21 12:10:56.167668] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:20.003 [2024-10-21 12:10:56.167725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:20.003 [2024-10-21 12:10:56.167746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:20.003 [2024-10-21 12:10:56.167770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:20.003 [2024-10-21 12:10:56.167813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.003 [2024-10-21 12:10:56.167836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.003 12:10:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.944 [2024-10-21 12:10:57.170240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:20.944 [2024-10-21 12:10:57.170261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:20.944 [2024-10-21 12:10:57.170268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:20.944 [2024-10-21 12:10:57.170273] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:20.944 [2024-10-21 12:10:57.170282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.944 [2024-10-21 12:10:57.170297] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:20.944 [2024-10-21 12:10:57.170314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.944 [2024-10-21 12:10:57.170325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.944 [2024-10-21 12:10:57.170333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.944 [2024-10-21 12:10:57.170338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.944 [2024-10-21 12:10:57.170344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.944 [2024-10-21 12:10:57.170348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.944 [2024-10-21 12:10:57.170354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.944 [2024-10-21 12:10:57.170359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.944 [2024-10-21 12:10:57.170365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.944 [2024-10-21 12:10:57.170370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.944 [2024-10-21 12:10:57.170375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:20.944 [2024-10-21 12:10:57.170395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a8f40 (9): Bad file descriptor 00:26:20.944 [2024-10-21 12:10:57.171394] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:20.944 [2024-10-21 12:10:57.171402] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:20.944 12:10:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.886 12:10:58 
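Recovery mirrors the removal. The address and link come back, discovery re-attaches, and because the original controller was deleted on controller loss, the re-created namespace surfaces as nvme1n1 rather than nvme0n1. Condensed from the trace:

# Restore step, as run above; discovery re-attaches and creates nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1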
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:21.886 12:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.825 [2024-10-21 12:10:59.184491] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.825 [2024-10-21 12:10:59.184507] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.825 [2024-10-21 12:10:59.184517] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.826 [2024-10-21 12:10:59.273771] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:22.826 [2024-10-21 12:10:59.333851] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:22.826 [2024-10-21 12:10:59.333883] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:22.826 [2024-10-21 12:10:59.333898] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:22.826 [2024-10-21 12:10:59.333909] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:22.826 [2024-10-21 12:10:59.333915] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.826 [2024-10-21 12:10:59.382792] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15c3d40 was disconnected and freed. delete nvme_qpair. 
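The polling above is the harness waiting for the rediscovered namespace: get_bdev_list dumps bdev names over the host RPC socket, and wait_for_bdev re-runs it once per second until nvme1n1 appears. A sketch reconstructed from the xtrace output (the real helpers live in host/discovery_remove_ifc.sh, so treat this as a paraphrase rather than the verbatim source):

    get_bdev_list() {
        # One sorted, space-joined line of bdev names from the host app.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1
        # The trace compares the whole (single-entry) list, so an exact
        # match suffices; poll once per second until the bdev shows up.
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1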
00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1110910 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1110910 ']' 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1110910 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1110910 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1110910' 00:26:23.086 killing process with pid 1110910 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1110910 00:26:23.086 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1110910 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.347 rmmod nvme_tcp 00:26:23.347 rmmod nvme_fabrics 00:26:23.347 rmmod nvme_keyring 00:26:23.347 12:10:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1110773 ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1110773 ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1110773' 00:26:23.347 killing process with pid 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1110773 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:23.347 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.607 12:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.521 00:26:25.521 real 0m23.334s 00:26:25.521 user 0m27.301s 00:26:25.521 sys 0m7.164s 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.521 ************************************ 00:26:25.521 END TEST nvmf_discovery_remove_ifc 00:26:25.521 ************************************ 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.521 ************************************ 00:26:25.521 START TEST nvmf_identify_kernel_target 00:26:25.521 ************************************ 00:26:25.521 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:25.783 * Looking for test storage... 00:26:25.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.783 --rc genhtml_branch_coverage=1 00:26:25.783 --rc genhtml_function_coverage=1 00:26:25.783 --rc genhtml_legend=1 00:26:25.783 --rc geninfo_all_blocks=1 00:26:25.783 --rc geninfo_unexecuted_blocks=1 00:26:25.783 00:26:25.783 ' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.783 --rc genhtml_branch_coverage=1 00:26:25.783 --rc genhtml_function_coverage=1 00:26:25.783 --rc genhtml_legend=1 00:26:25.783 --rc geninfo_all_blocks=1 00:26:25.783 --rc geninfo_unexecuted_blocks=1 00:26:25.783 00:26:25.783 ' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.783 --rc genhtml_branch_coverage=1 00:26:25.783 --rc genhtml_function_coverage=1 00:26:25.783 --rc genhtml_legend=1 00:26:25.783 --rc geninfo_all_blocks=1 00:26:25.783 --rc geninfo_unexecuted_blocks=1 00:26:25.783 00:26:25.783 ' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.783 --rc genhtml_branch_coverage=1 00:26:25.783 --rc genhtml_function_coverage=1 00:26:25.783 --rc genhtml_legend=1 00:26:25.783 --rc geninfo_all_blocks=1 00:26:25.783 --rc geninfo_unexecuted_blocks=1 00:26:25.783 00:26:25.783 ' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.783 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:25.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.784 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.924 12:11:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.924 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.925 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:26:33.925 00:26:33.925 --- 10.0.0.2 ping statistics --- 00:26:33.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.925 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:33.925 00:26:33.925 --- 10.0.0.1 ping statistics --- 00:26:33.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.925 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.925 12:11:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:33.925 12:11:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:37.227 Waiting for block devices as requested 00:26:37.227 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:37.227 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:37.488 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:37.488 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:37.749 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:37.749 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:37.749 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:38.010 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:38.010 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:38.010 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:38.271 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:38.271 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
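The xtrace that follows drives the Linux kernel nvmet target entirely through configfs: one subsystem, one namespace backed by /dev/nvme0n1, and a TCP port on 10.0.0.1:4420 linked to the subsystem. Condensed into plain shell, with the caveat that the trace only shows the values being echoed, so the attribute file names here are the standard nvmet configfs ones and are inferred rather than read from this log:

    # Sketch of the configure_kernel_target sequence traced below.
    modprobe nvmet          # nvmet-tcp is normally autoloaded once the
                            # port's addr_trtype is set to tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1 > "$subsys/attr_allow_any_host"   # assumption: matches the bare 'echo 1' in the trace
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

Once the symlink lands, the nvme discover call against 10.0.0.1:4420 returns the two records shown below: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.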
00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.533 No valid GPT data, bailing 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.533 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:38.794 00:26:38.794 Discovery Log Number of Records 2, Generation counter 2 00:26:38.794 =====Discovery Log Entry 0====== 00:26:38.794 trtype: tcp 00:26:38.794 adrfam: ipv4 00:26:38.794 subtype: current discovery subsystem 00:26:38.794 treq: not specified, sq flow control disable supported 00:26:38.794 portid: 1 00:26:38.794 trsvcid: 4420 00:26:38.794 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.794 traddr: 10.0.0.1 00:26:38.794 eflags: none 00:26:38.794 sectype: none 00:26:38.794 =====Discovery Log Entry 1====== 00:26:38.794 trtype: tcp 00:26:38.794 adrfam: ipv4 00:26:38.794 subtype: nvme subsystem 00:26:38.794 treq: not specified, sq flow control disable 
supported 00:26:38.794 portid: 1 00:26:38.794 trsvcid: 4420 00:26:38.794 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:38.794 traddr: 10.0.0.1 00:26:38.794 eflags: none 00:26:38.794 sectype: none 00:26:38.794 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:38.794 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:38.794 ===================================================== 00:26:38.794 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:38.794 ===================================================== 00:26:38.794 Controller Capabilities/Features 00:26:38.794 ================================ 00:26:38.794 Vendor ID: 0000 00:26:38.794 Subsystem Vendor ID: 0000 00:26:38.795 Serial Number: 893a689197b9abb4e886 00:26:38.795 Model Number: Linux 00:26:38.795 Firmware Version: 6.8.9-20 00:26:38.795 Recommended Arb Burst: 0 00:26:38.795 IEEE OUI Identifier: 00 00 00 00:26:38.795 Multi-path I/O 00:26:38.795 May have multiple subsystem ports: No 00:26:38.795 May have multiple controllers: No 00:26:38.795 Associated with SR-IOV VF: No 00:26:38.795 Max Data Transfer Size: Unlimited 00:26:38.795 Max Number of Namespaces: 0 00:26:38.795 Max Number of I/O Queues: 1024 00:26:38.795 NVMe Specification Version (VS): 1.3 00:26:38.795 NVMe Specification Version (Identify): 1.3 00:26:38.795 Maximum Queue Entries: 1024 00:26:38.795 Contiguous Queues Required: No 00:26:38.795 Arbitration Mechanisms Supported 00:26:38.795 Weighted Round Robin: Not Supported 00:26:38.795 Vendor Specific: Not Supported 00:26:38.795 Reset Timeout: 7500 ms 00:26:38.795 Doorbell Stride: 4 bytes 00:26:38.795 NVM Subsystem Reset: Not Supported 00:26:38.795 Command Sets Supported 00:26:38.795 NVM Command Set: Supported 00:26:38.795 Boot Partition: Not Supported 00:26:38.795 Memory Page Size Minimum: 4096 bytes 00:26:38.795 Memory Page Size Maximum: 4096 bytes 00:26:38.795 Persistent Memory Region: Not Supported 00:26:38.795 Optional Asynchronous Events Supported 00:26:38.795 Namespace Attribute Notices: Not Supported 00:26:38.795 Firmware Activation Notices: Not Supported 00:26:38.795 ANA Change Notices: Not Supported 00:26:38.795 PLE Aggregate Log Change Notices: Not Supported 00:26:38.795 LBA Status Info Alert Notices: Not Supported 00:26:38.795 EGE Aggregate Log Change Notices: Not Supported 00:26:38.795 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.795 Zone Descriptor Change Notices: Not Supported 00:26:38.795 Discovery Log Change Notices: Supported 00:26:38.795 Controller Attributes 00:26:38.795 128-bit Host Identifier: Not Supported 00:26:38.795 Non-Operational Permissive Mode: Not Supported 00:26:38.795 NVM Sets: Not Supported 00:26:38.795 Read Recovery Levels: Not Supported 00:26:38.795 Endurance Groups: Not Supported 00:26:38.795 Predictable Latency Mode: Not Supported 00:26:38.795 Traffic Based Keep ALive: Not Supported 00:26:38.795 Namespace Granularity: Not Supported 00:26:38.795 SQ Associations: Not Supported 00:26:38.795 UUID List: Not Supported 00:26:38.795 Multi-Domain Subsystem: Not Supported 00:26:38.795 Fixed Capacity Management: Not Supported 00:26:38.795 Variable Capacity Management: Not Supported 00:26:38.795 Delete Endurance Group: Not Supported 00:26:38.795 Delete NVM Set: Not Supported 00:26:38.795 Extended LBA Formats Supported: Not Supported 00:26:38.795 Flexible Data Placement 
Supported: Not Supported 00:26:38.795 00:26:38.795 Controller Memory Buffer Support 00:26:38.795 ================================ 00:26:38.795 Supported: No 00:26:38.795 00:26:38.795 Persistent Memory Region Support 00:26:38.795 ================================ 00:26:38.795 Supported: No 00:26:38.795 00:26:38.795 Admin Command Set Attributes 00:26:38.795 ============================ 00:26:38.795 Security Send/Receive: Not Supported 00:26:38.795 Format NVM: Not Supported 00:26:38.795 Firmware Activate/Download: Not Supported 00:26:38.795 Namespace Management: Not Supported 00:26:38.795 Device Self-Test: Not Supported 00:26:38.795 Directives: Not Supported 00:26:38.795 NVMe-MI: Not Supported 00:26:38.795 Virtualization Management: Not Supported 00:26:38.795 Doorbell Buffer Config: Not Supported 00:26:38.795 Get LBA Status Capability: Not Supported 00:26:38.795 Command & Feature Lockdown Capability: Not Supported 00:26:38.795 Abort Command Limit: 1 00:26:38.795 Async Event Request Limit: 1 00:26:38.795 Number of Firmware Slots: N/A 00:26:38.795 Firmware Slot 1 Read-Only: N/A 00:26:38.795 Firmware Activation Without Reset: N/A 00:26:38.795 Multiple Update Detection Support: N/A 00:26:38.795 Firmware Update Granularity: No Information Provided 00:26:38.795 Per-Namespace SMART Log: No 00:26:38.795 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.795 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:38.795 Command Effects Log Page: Not Supported 00:26:38.795 Get Log Page Extended Data: Supported 00:26:38.795 Telemetry Log Pages: Not Supported 00:26:38.795 Persistent Event Log Pages: Not Supported 00:26:38.795 Supported Log Pages Log Page: May Support 00:26:38.795 Commands Supported & Effects Log Page: Not Supported 00:26:38.795 Feature Identifiers & Effects Log Page:May Support 00:26:38.795 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.795 Data Area 4 for Telemetry Log: Not Supported 00:26:38.795 Error Log Page Entries Supported: 1 00:26:38.795 Keep Alive: Not Supported 00:26:38.795 00:26:38.795 NVM Command Set Attributes 00:26:38.795 ========================== 00:26:38.795 Submission Queue Entry Size 00:26:38.795 Max: 1 00:26:38.795 Min: 1 00:26:38.795 Completion Queue Entry Size 00:26:38.795 Max: 1 00:26:38.795 Min: 1 00:26:38.795 Number of Namespaces: 0 00:26:38.795 Compare Command: Not Supported 00:26:38.795 Write Uncorrectable Command: Not Supported 00:26:38.795 Dataset Management Command: Not Supported 00:26:38.795 Write Zeroes Command: Not Supported 00:26:38.795 Set Features Save Field: Not Supported 00:26:38.795 Reservations: Not Supported 00:26:38.795 Timestamp: Not Supported 00:26:38.795 Copy: Not Supported 00:26:38.795 Volatile Write Cache: Not Present 00:26:38.795 Atomic Write Unit (Normal): 1 00:26:38.795 Atomic Write Unit (PFail): 1 00:26:38.795 Atomic Compare & Write Unit: 1 00:26:38.795 Fused Compare & Write: Not Supported 00:26:38.795 Scatter-Gather List 00:26:38.795 SGL Command Set: Supported 00:26:38.795 SGL Keyed: Not Supported 00:26:38.795 SGL Bit Bucket Descriptor: Not Supported 00:26:38.795 SGL Metadata Pointer: Not Supported 00:26:38.795 Oversized SGL: Not Supported 00:26:38.795 SGL Metadata Address: Not Supported 00:26:38.795 SGL Offset: Supported 00:26:38.795 Transport SGL Data Block: Not Supported 00:26:38.795 Replay Protected Memory Block: Not Supported 00:26:38.795 00:26:38.795 Firmware Slot Information 00:26:38.795 ========================= 00:26:38.795 Active slot: 0 00:26:38.795 00:26:38.795 00:26:38.795 Error Log 00:26:38.795 
========= 00:26:38.795 00:26:38.795 Active Namespaces 00:26:38.795 ================= 00:26:38.795 Discovery Log Page 00:26:38.795 ================== 00:26:38.795 Generation Counter: 2 00:26:38.795 Number of Records: 2 00:26:38.795 Record Format: 0 00:26:38.795 00:26:38.795 Discovery Log Entry 0 00:26:38.795 ---------------------- 00:26:38.795 Transport Type: 3 (TCP) 00:26:38.795 Address Family: 1 (IPv4) 00:26:38.795 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:38.795 Entry Flags: 00:26:38.795 Duplicate Returned Information: 0 00:26:38.795 Explicit Persistent Connection Support for Discovery: 0 00:26:38.795 Transport Requirements: 00:26:38.795 Secure Channel: Not Specified 00:26:38.795 Port ID: 1 (0x0001) 00:26:38.795 Controller ID: 65535 (0xffff) 00:26:38.795 Admin Max SQ Size: 32 00:26:38.795 Transport Service Identifier: 4420 00:26:38.795 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:38.795 Transport Address: 10.0.0.1 00:26:38.795 Discovery Log Entry 1 00:26:38.795 ---------------------- 00:26:38.795 Transport Type: 3 (TCP) 00:26:38.795 Address Family: 1 (IPv4) 00:26:38.795 Subsystem Type: 2 (NVM Subsystem) 00:26:38.795 Entry Flags: 00:26:38.795 Duplicate Returned Information: 0 00:26:38.795 Explicit Persistent Connection Support for Discovery: 0 00:26:38.795 Transport Requirements: 00:26:38.795 Secure Channel: Not Specified 00:26:38.795 Port ID: 1 (0x0001) 00:26:38.795 Controller ID: 65535 (0xffff) 00:26:38.795 Admin Max SQ Size: 32 00:26:38.795 Transport Service Identifier: 4420 00:26:38.795 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:38.795 Transport Address: 10.0.0.1 00:26:38.795 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.057 get_feature(0x01) failed 00:26:39.057 get_feature(0x02) failed 00:26:39.057 get_feature(0x04) failed 00:26:39.057 ===================================================== 00:26:39.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:39.057 ===================================================== 00:26:39.057 Controller Capabilities/Features 00:26:39.058 ================================ 00:26:39.058 Vendor ID: 0000 00:26:39.058 Subsystem Vendor ID: 0000 00:26:39.058 Serial Number: 76069d770101a058d1d5 00:26:39.058 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.058 Firmware Version: 6.8.9-20 00:26:39.058 Recommended Arb Burst: 6 00:26:39.058 IEEE OUI Identifier: 00 00 00 00:26:39.058 Multi-path I/O 00:26:39.058 May have multiple subsystem ports: Yes 00:26:39.058 May have multiple controllers: Yes 00:26:39.058 Associated with SR-IOV VF: No 00:26:39.058 Max Data Transfer Size: Unlimited 00:26:39.058 Max Number of Namespaces: 1024 00:26:39.058 Max Number of I/O Queues: 128 00:26:39.058 NVMe Specification Version (VS): 1.3 00:26:39.058 NVMe Specification Version (Identify): 1.3 00:26:39.058 Maximum Queue Entries: 1024 00:26:39.058 Contiguous Queues Required: No 00:26:39.058 Arbitration Mechanisms Supported 00:26:39.058 Weighted Round Robin: Not Supported 00:26:39.058 Vendor Specific: Not Supported 00:26:39.058 Reset Timeout: 7500 ms 00:26:39.058 Doorbell Stride: 4 bytes 00:26:39.058 NVM Subsystem Reset: Not Supported 00:26:39.058 Command Sets Supported 00:26:39.058 NVM Command Set: Supported 00:26:39.058 Boot Partition: Not Supported 00:26:39.058 
Memory Page Size Minimum: 4096 bytes 00:26:39.058 Memory Page Size Maximum: 4096 bytes 00:26:39.058 Persistent Memory Region: Not Supported 00:26:39.058 Optional Asynchronous Events Supported 00:26:39.058 Namespace Attribute Notices: Supported 00:26:39.058 Firmware Activation Notices: Not Supported 00:26:39.058 ANA Change Notices: Supported 00:26:39.058 PLE Aggregate Log Change Notices: Not Supported 00:26:39.058 LBA Status Info Alert Notices: Not Supported 00:26:39.058 EGE Aggregate Log Change Notices: Not Supported 00:26:39.058 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.058 Zone Descriptor Change Notices: Not Supported 00:26:39.058 Discovery Log Change Notices: Not Supported 00:26:39.058 Controller Attributes 00:26:39.058 128-bit Host Identifier: Supported 00:26:39.058 Non-Operational Permissive Mode: Not Supported 00:26:39.058 NVM Sets: Not Supported 00:26:39.058 Read Recovery Levels: Not Supported 00:26:39.058 Endurance Groups: Not Supported 00:26:39.058 Predictable Latency Mode: Not Supported 00:26:39.058 Traffic Based Keep ALive: Supported 00:26:39.058 Namespace Granularity: Not Supported 00:26:39.058 SQ Associations: Not Supported 00:26:39.058 UUID List: Not Supported 00:26:39.058 Multi-Domain Subsystem: Not Supported 00:26:39.058 Fixed Capacity Management: Not Supported 00:26:39.058 Variable Capacity Management: Not Supported 00:26:39.058 Delete Endurance Group: Not Supported 00:26:39.058 Delete NVM Set: Not Supported 00:26:39.058 Extended LBA Formats Supported: Not Supported 00:26:39.058 Flexible Data Placement Supported: Not Supported 00:26:39.058 00:26:39.058 Controller Memory Buffer Support 00:26:39.058 ================================ 00:26:39.058 Supported: No 00:26:39.058 00:26:39.058 Persistent Memory Region Support 00:26:39.058 ================================ 00:26:39.058 Supported: No 00:26:39.058 00:26:39.058 Admin Command Set Attributes 00:26:39.058 ============================ 00:26:39.058 Security Send/Receive: Not Supported 00:26:39.058 Format NVM: Not Supported 00:26:39.058 Firmware Activate/Download: Not Supported 00:26:39.058 Namespace Management: Not Supported 00:26:39.058 Device Self-Test: Not Supported 00:26:39.058 Directives: Not Supported 00:26:39.058 NVMe-MI: Not Supported 00:26:39.058 Virtualization Management: Not Supported 00:26:39.058 Doorbell Buffer Config: Not Supported 00:26:39.058 Get LBA Status Capability: Not Supported 00:26:39.058 Command & Feature Lockdown Capability: Not Supported 00:26:39.058 Abort Command Limit: 4 00:26:39.058 Async Event Request Limit: 4 00:26:39.058 Number of Firmware Slots: N/A 00:26:39.058 Firmware Slot 1 Read-Only: N/A 00:26:39.058 Firmware Activation Without Reset: N/A 00:26:39.058 Multiple Update Detection Support: N/A 00:26:39.058 Firmware Update Granularity: No Information Provided 00:26:39.058 Per-Namespace SMART Log: Yes 00:26:39.058 Asymmetric Namespace Access Log Page: Supported 00:26:39.058 ANA Transition Time : 10 sec 00:26:39.058 00:26:39.058 Asymmetric Namespace Access Capabilities 00:26:39.058 ANA Optimized State : Supported 00:26:39.058 ANA Non-Optimized State : Supported 00:26:39.058 ANA Inaccessible State : Supported 00:26:39.058 ANA Persistent Loss State : Supported 00:26:39.058 ANA Change State : Supported 00:26:39.058 ANAGRPID is not changed : No 00:26:39.058 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:39.058 00:26:39.058 ANA Group Identifier Maximum : 128 00:26:39.058 Number of ANA Group Identifiers : 128 00:26:39.058 Max Number of Allowed Namespaces : 1024 00:26:39.058 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:39.058 Command Effects Log Page: Supported 00:26:39.058 Get Log Page Extended Data: Supported 00:26:39.058 Telemetry Log Pages: Not Supported 00:26:39.058 Persistent Event Log Pages: Not Supported 00:26:39.058 Supported Log Pages Log Page: May Support 00:26:39.058 Commands Supported & Effects Log Page: Not Supported 00:26:39.058 Feature Identifiers & Effects Log Page:May Support 00:26:39.058 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.058 Data Area 4 for Telemetry Log: Not Supported 00:26:39.058 Error Log Page Entries Supported: 128 00:26:39.058 Keep Alive: Supported 00:26:39.058 Keep Alive Granularity: 1000 ms 00:26:39.058 00:26:39.058 NVM Command Set Attributes 00:26:39.058 ========================== 00:26:39.058 Submission Queue Entry Size 00:26:39.058 Max: 64 00:26:39.058 Min: 64 00:26:39.058 Completion Queue Entry Size 00:26:39.058 Max: 16 00:26:39.058 Min: 16 00:26:39.058 Number of Namespaces: 1024 00:26:39.058 Compare Command: Not Supported 00:26:39.058 Write Uncorrectable Command: Not Supported 00:26:39.058 Dataset Management Command: Supported 00:26:39.058 Write Zeroes Command: Supported 00:26:39.058 Set Features Save Field: Not Supported 00:26:39.058 Reservations: Not Supported 00:26:39.058 Timestamp: Not Supported 00:26:39.058 Copy: Not Supported 00:26:39.058 Volatile Write Cache: Present 00:26:39.058 Atomic Write Unit (Normal): 1 00:26:39.058 Atomic Write Unit (PFail): 1 00:26:39.058 Atomic Compare & Write Unit: 1 00:26:39.058 Fused Compare & Write: Not Supported 00:26:39.058 Scatter-Gather List 00:26:39.058 SGL Command Set: Supported 00:26:39.058 SGL Keyed: Not Supported 00:26:39.058 SGL Bit Bucket Descriptor: Not Supported 00:26:39.058 SGL Metadata Pointer: Not Supported 00:26:39.058 Oversized SGL: Not Supported 00:26:39.058 SGL Metadata Address: Not Supported 00:26:39.058 SGL Offset: Supported 00:26:39.058 Transport SGL Data Block: Not Supported 00:26:39.058 Replay Protected Memory Block: Not Supported 00:26:39.058 00:26:39.058 Firmware Slot Information 00:26:39.058 ========================= 00:26:39.058 Active slot: 0 00:26:39.058 00:26:39.058 Asymmetric Namespace Access 00:26:39.058 =========================== 00:26:39.058 Change Count : 0 00:26:39.058 Number of ANA Group Descriptors : 1 00:26:39.058 ANA Group Descriptor : 0 00:26:39.058 ANA Group ID : 1 00:26:39.058 Number of NSID Values : 1 00:26:39.058 Change Count : 0 00:26:39.058 ANA State : 1 00:26:39.058 Namespace Identifier : 1 00:26:39.058 00:26:39.058 Commands Supported and Effects 00:26:39.058 ============================== 00:26:39.058 Admin Commands 00:26:39.058 -------------- 00:26:39.058 Get Log Page (02h): Supported 00:26:39.058 Identify (06h): Supported 00:26:39.058 Abort (08h): Supported 00:26:39.058 Set Features (09h): Supported 00:26:39.058 Get Features (0Ah): Supported 00:26:39.058 Asynchronous Event Request (0Ch): Supported 00:26:39.058 Keep Alive (18h): Supported 00:26:39.058 I/O Commands 00:26:39.058 ------------ 00:26:39.058 Flush (00h): Supported 00:26:39.058 Write (01h): Supported LBA-Change 00:26:39.058 Read (02h): Supported 00:26:39.058 Write Zeroes (08h): Supported LBA-Change 00:26:39.058 Dataset Management (09h): Supported 00:26:39.058 00:26:39.058 Error Log 00:26:39.058 ========= 00:26:39.058 Entry: 0 00:26:39.058 Error Count: 0x3 00:26:39.058 Submission Queue Id: 0x0 00:26:39.058 Command Id: 0x5 00:26:39.058 Phase Bit: 0 00:26:39.058 Status Code: 0x2 00:26:39.058 Status Code Type: 0x0 00:26:39.058 Do Not Retry: 1 00:26:39.058 
Error Location: 0x28 00:26:39.058 LBA: 0x0 00:26:39.058 Namespace: 0x0 00:26:39.058 Vendor Log Page: 0x0 00:26:39.058 ----------- 00:26:39.058 Entry: 1 00:26:39.058 Error Count: 0x2 00:26:39.058 Submission Queue Id: 0x0 00:26:39.058 Command Id: 0x5 00:26:39.058 Phase Bit: 0 00:26:39.058 Status Code: 0x2 00:26:39.058 Status Code Type: 0x0 00:26:39.058 Do Not Retry: 1 00:26:39.059 Error Location: 0x28 00:26:39.059 LBA: 0x0 00:26:39.059 Namespace: 0x0 00:26:39.059 Vendor Log Page: 0x0 00:26:39.059 ----------- 00:26:39.059 Entry: 2 00:26:39.059 Error Count: 0x1 00:26:39.059 Submission Queue Id: 0x0 00:26:39.059 Command Id: 0x4 00:26:39.059 Phase Bit: 0 00:26:39.059 Status Code: 0x2 00:26:39.059 Status Code Type: 0x0 00:26:39.059 Do Not Retry: 1 00:26:39.059 Error Location: 0x28 00:26:39.059 LBA: 0x0 00:26:39.059 Namespace: 0x0 00:26:39.059 Vendor Log Page: 0x0 00:26:39.059 00:26:39.059 Number of Queues 00:26:39.059 ================ 00:26:39.059 Number of I/O Submission Queues: 128 00:26:39.059 Number of I/O Completion Queues: 128 00:26:39.059 00:26:39.059 ZNS Specific Controller Data 00:26:39.059 ============================ 00:26:39.059 Zone Append Size Limit: 0 00:26:39.059 00:26:39.059 00:26:39.059 Active Namespaces 00:26:39.059 ================= 00:26:39.059 get_feature(0x05) failed 00:26:39.059 Namespace ID:1 00:26:39.059 Command Set Identifier: NVM (00h) 00:26:39.059 Deallocate: Supported 00:26:39.059 Deallocated/Unwritten Error: Not Supported 00:26:39.059 Deallocated Read Value: Unknown 00:26:39.059 Deallocate in Write Zeroes: Not Supported 00:26:39.059 Deallocated Guard Field: 0xFFFF 00:26:39.059 Flush: Supported 00:26:39.059 Reservation: Not Supported 00:26:39.059 Namespace Sharing Capabilities: Multiple Controllers 00:26:39.059 Size (in LBAs): 3750748848 (1788GiB) 00:26:39.059 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:39.059 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:39.059 UUID: 6a82ba7a-43ce-4024-9626-401325d901a8 00:26:39.059 Thin Provisioning: Not Supported 00:26:39.059 Per-NS Atomic Units: Yes 00:26:39.059 Atomic Write Unit (Normal): 8 00:26:39.059 Atomic Write Unit (PFail): 8 00:26:39.059 Preferred Write Granularity: 8 00:26:39.059 Atomic Compare & Write Unit: 8 00:26:39.059 Atomic Boundary Size (Normal): 0 00:26:39.059 Atomic Boundary Size (PFail): 0 00:26:39.059 Atomic Boundary Offset: 0 00:26:39.059 NGUID/EUI64 Never Reused: No 00:26:39.059 ANA group ID: 1 00:26:39.059 Namespace Write Protected: No 00:26:39.059 Number of LBA Formats: 1 00:26:39.059 Current LBA Format: LBA Format #00 00:26:39.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:39.059 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.059 rmmod nvme_tcp 00:26:39.059 rmmod nvme_fabrics 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.059 12:11:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:41.607 12:11:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:44.909 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:44.909 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:45.169 00:26:45.169 real 0m19.614s 00:26:45.169 user 0m5.240s 00:26:45.169 sys 0m11.401s 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.170 ************************************ 00:26:45.170 END TEST nvmf_identify_kernel_target 00:26:45.170 ************************************ 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.170 12:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.430 ************************************ 00:26:45.430 START TEST nvmf_auth_host 00:26:45.430 ************************************ 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:45.430 * Looking for test storage... 
00:26:45.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:45.430 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.431 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.431 --rc genhtml_branch_coverage=1 00:26:45.431 --rc genhtml_function_coverage=1 00:26:45.431 --rc genhtml_legend=1 00:26:45.431 --rc geninfo_all_blocks=1 00:26:45.431 --rc geninfo_unexecuted_blocks=1 00:26:45.431 00:26:45.431 ' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.431 --rc genhtml_branch_coverage=1 00:26:45.431 --rc genhtml_function_coverage=1 00:26:45.431 --rc genhtml_legend=1 00:26:45.431 --rc geninfo_all_blocks=1 00:26:45.431 --rc geninfo_unexecuted_blocks=1 00:26:45.431 00:26:45.431 ' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.431 --rc genhtml_branch_coverage=1 00:26:45.431 --rc genhtml_function_coverage=1 00:26:45.431 --rc genhtml_legend=1 00:26:45.431 --rc geninfo_all_blocks=1 00:26:45.431 --rc geninfo_unexecuted_blocks=1 00:26:45.431 00:26:45.431 ' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.431 --rc genhtml_branch_coverage=1 00:26:45.431 --rc genhtml_function_coverage=1 00:26:45.431 --rc genhtml_legend=1 00:26:45.431 --rc geninfo_all_blocks=1 00:26:45.431 --rc geninfo_unexecuted_blocks=1 00:26:45.431 00:26:45.431 ' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.431 12:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.431 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:45.691 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.692 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.948 12:11:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.948 
12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.948 12:11:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.948 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:26:53.949 00:26:53.949 --- 10.0.0.2 ping statistics --- 00:26:53.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.949 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:26:53.949 00:26:53.949 --- 10.0.0.1 ping statistics --- 00:26:53.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.949 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1125173 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1125173 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1125173 ']' 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
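The nvmf_tcp_init fixture traced just above moves one physical port (cvl_0_0) into a private network namespace and leaves its peer port (cvl_0_1) in the root namespace, so the target and the initiator can talk over real NICs on a single host; nvmf_tgt is then launched inside that namespace via NVMF_TARGET_NS_CMD. A minimal sketch of the same plumbing, using the interface names and 10.0.0.0/24 addresses from this run (run as root; the iptables comment-matching seen in the trace is omitted for brevity):

# move the target-side port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address the initiator side (root ns) and the target side (inside the ns)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring up both ends plus loopback in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic (port 4420) in on the initiator interface, as above
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions before starting nvmf_tgt in the ns
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1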
00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.949 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f14f9aa510f4f5a2318882b197009913 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.HGs 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f14f9aa510f4f5a2318882b197009913 0 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f14f9aa510f4f5a2318882b197009913 0 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f14f9aa510f4f5a2318882b197009913 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.HGs 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.HGs 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HGs 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:53.949 12:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0f1edbdf78de035a641d3881fdc81a6be1afe5b3d81f070f270665efaaab9ba0 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:53.949 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.MNV 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0f1edbdf78de035a641d3881fdc81a6be1afe5b3d81f070f270665efaaab9ba0 3 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0f1edbdf78de035a641d3881fdc81a6be1afe5b3d81f070f270665efaaab9ba0 3 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0f1edbdf78de035a641d3881fdc81a6be1afe5b3d81f070f270665efaaab9ba0 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.MNV 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.MNV 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MNV 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8e0400930312eda8d0cd00615279a45218ae99d538351c90 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.SrI 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8e0400930312eda8d0cd00615279a45218ae99d538351c90 0 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8e0400930312eda8d0cd00615279a45218ae99d538351c90 0 
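Each gen_dhchap_key call in this sequence draws random bytes with xxd -p from /dev/urandom, wraps them via format_dhchap_key (the `python -` step whose body is not shown in the trace), and stores the result in a mktemp file restricted to mode 0600. A sketch of an equivalent generator follows; the DHHC-1 wrapping shown here (base64 of the secret with its little-endian CRC32 appended, digest byte 0=null/1=sha256/2=sha384/3=sha512) is taken from the NVMe DH-HMAC-CHAP secret representation rather than from this log, so treat those encoding details as an assumption:

# 32 random bytes -> 64 hex chars, matching gen_dhchap_key sha512 64 above
key=$(xxd -p -c0 -l 32 /dev/urandom)
file=$(mktemp -t spdk.key-sha512.XXX)
# emit DHHC-1:<digest>:<base64(secret || crc32(secret))>:  (digest 3 = sha512)
python3 -c 'import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")' "$key" 3 > "$file"
chmod 0600 "$file"
echo "$file"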
00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8e0400930312eda8d0cd00615279a45218ae99d538351c90 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.SrI 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.SrI 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SrI 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c2e75ddb7d920246713e454941a7a238674db295621026a6 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Z6U 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c2e75ddb7d920246713e454941a7a238674db295621026a6 2 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c2e75ddb7d920246713e454941a7a238674db295621026a6 2 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c2e75ddb7d920246713e454941a7a238674db295621026a6 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Z6U 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Z6U 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Z6U 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 12:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4551fb45e5e9078d3253ea657e93e1bb 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.EUj 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4551fb45e5e9078d3253ea657e93e1bb 1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4551fb45e5e9078d3253ea657e93e1bb 1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4551fb45e5e9078d3253ea657e93e1bb 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.EUj 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.EUj 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EUj 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:54.211 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=382df574de09758f1b036c7f72d8824a 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.2fG 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 382df574de09758f1b036c7f72d8824a 1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 382df574de09758f1b036c7f72d8824a 1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=382df574de09758f1b036c7f72d8824a 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.2fG 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.2fG 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2fG 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3a6aeb595c471ff63203880d1444e75b8e80f883713a008a 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.pwN 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3a6aeb595c471ff63203880d1444e75b8e80f883713a008a 2 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3a6aeb595c471ff63203880d1444e75b8e80f883713a008a 2 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3a6aeb595c471ff63203880d1444e75b8e80f883713a008a 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.pwN 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.pwN 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.pwN 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:54.473 12:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5d013ea21253f60667702da5e69aeef3 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.QBy 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5d013ea21253f60667702da5e69aeef3 0 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5d013ea21253f60667702da5e69aeef3 0 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5d013ea21253f60667702da5e69aeef3 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:54.473 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.QBy 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.QBy 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.QBy 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=81a45deff6fe4c8c5efe4ca3890ad60dca30fe09a02757bbb2706e73602f71f8 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Gn1 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 81a45deff6fe4c8c5efe4ca3890ad60dca30fe09a02757bbb2706e73602f71f8 3 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 81a45deff6fe4c8c5efe4ca3890ad60dca30fe09a02757bbb2706e73602f71f8 3 00:26:54.473 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:54.474 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:54.474 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=81a45deff6fe4c8c5efe4ca3890ad60dca30fe09a02757bbb2706e73602f71f8 00:26:54.474 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:54.474 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Gn1 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Gn1 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Gn1 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1125173 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1125173 ']' 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HGs 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MNV ]] 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MNV 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SrI 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Z6U ]] 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Z6U 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.735 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EUj 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2fG ]] 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2fG 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.996 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.pwN 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.QBy ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.QBy 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Gn1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.997 12:11:31 
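The host/auth.sh@80-82 loop above registers every generated secret file with the running target over its RPC socket. Spelled out as plain commands, the sequence amounts to the following sketch (rpc_cmd in this harness wraps something like scripts/rpc.py against /var/tmp/spdk.sock; the key names and file paths are the ones printed earlier in this trace):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.HGs
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MNV
    $rpc keyring_file_add_key key1  /tmp/spdk.key-null.SrI
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Z6U
    $rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.EUj
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2fG
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.pwN
    $rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.QBy
    $rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.Gn1   # ckeys[4] is empty, so no ckey4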
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]]
00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:54.997 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:58.298 Waiting for block devices as requested
00:26:58.298 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:26:58.298 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:26:58.558 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:26:58.558 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:26:58.558 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:26:58.817 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:26:58.817 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:26:58.817 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:26:59.077 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:26:59.077 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:26:59.336 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:26:59.336 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:26:59.336 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:26:59.336 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:26:59.596 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:26:59.596 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:26:59.596 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:00.536 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:00.536 No valid GPT data, bailing
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
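configure_kernel_target, traced immediately above and below, first scans /sys/block/nvme* for a backing device that is neither zoned nor already carrying a partition table, then assembles the kernel target out of nvmet configfs nodes. A condensed sketch of both halves; since xtrace hides redirection targets, the attr_*/device_path/addr_* attribute names are the standard kernel nvmet ones and are inferred, not read from this log (the trace also probes with scripts/spdk-gpt.py, which this sketch folds into the blkid check):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    # 1) pick the first idle, non-zoned NVMe block device
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue             # skip zoned devices
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue   # skip devices in use
        nvme=/dev/$dev && break
    done

    # 2) expose it through the kernel target on 10.0.0.1:4420/tcp
    mkdir -p "$subsys/namespaces/1" "$port"
    echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
    echo 1        > "$subsys/attr_allow_any_host"     # auth.sh presumably resets this to 0 below
    echo "$nvme"  > "$subsys/namespaces/1/device_path"
    echo 1        > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"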
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:00.536 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:00.797
00:27:00.797 Discovery Log Number of Records 2, Generation counter 2
00:27:00.797 =====Discovery Log Entry 0======
00:27:00.797 trtype: tcp
00:27:00.797 adrfam: ipv4
00:27:00.797 subtype: current discovery subsystem
00:27:00.797 treq: not specified, sq flow control disable supported
00:27:00.797 portid: 1
00:27:00.797 trsvcid: 4420
00:27:00.797 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:00.797 traddr: 10.0.0.1
00:27:00.797 eflags: none
00:27:00.797 sectype: none
00:27:00.797 =====Discovery Log Entry 1======
00:27:00.797 trtype: tcp
00:27:00.797 adrfam: ipv4
00:27:00.797 subtype: nvme subsystem
00:27:00.797 treq: not specified, sq flow control disable supported
00:27:00.797 portid: 1
00:27:00.797 trsvcid: 4420
00:27:00.797 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:00.797 traddr: 10.0.0.1
00:27:00.797 eflags: none
00:27:00.797 sectype: none
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==:
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==:
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.797 nvme0n1 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.797 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
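Each connect_authenticate round traced here is only a handful of RPCs against the initiator-side bdev layer; authentication counts as passed when the attached controller actually shows up in bdev_nvme_get_controllers. As plain commands (sketch; rpc_cmd again stands in for scripts/rpc.py, and the flags are the ones visible in this trace):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    # restrict the initiator to the digest/DH group under test
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach using key slot 0 plus its controller-side counterpart
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # DH-HMAC-CHAP succeeded iff the controller is really there
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0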
00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 nvme0n1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.058 12:11:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.058 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.059 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.319 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.320 nvme0n1 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.320 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.581 nvme0n1 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.581 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.842 nvme0n1 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.842 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.103 nvme0n1 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.103 12:11:38 
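On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) pushes the digest, DH group, and DHHC-1 secrets down to the kernel for the allowed host. xtrace suppresses the echo redirections, so the dhchap_* paths in this sketch are the kernel nvmet host attributes one would expect rather than values read from this log; the secret shown is keys[4] from this trace, the one slot with no controller key:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # assumed attribute names
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:' > "$host/dhchap_key"
    # key slots 0-3 additionally install the controller secret:
    # echo 'DHHC-1:...' > "$host/dhchap_ctrl_key"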
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.103 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.363 nvme0n1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.363 
12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.363 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.623 nvme0n1 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.623 12:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.623 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.624 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.884 nvme0n1 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.885 12:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.885 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.146 nvme0n1 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.146 12:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:03.146 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.147 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.409 nvme0n1 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.409 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.670 nvme0n1 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:03.670 12:11:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.670 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.671 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.931 nvme0n1 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.931 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
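The pass underway here follows the same shape as every (digest, dhgroup, keyid) iteration in this trace: constrain the host's DH-HMAC-CHAP parameters, attach with the matching key pair, confirm the controller came up, and detach. A minimal sketch of that sequence, using scripts/rpc.py in place of the suite's rpc_cmd wrapper and assuming the named keys (key2/ckey2 here) were registered earlier in auth.sh:

rpc=scripts/rpc.py

# Limit the host to one digest/DH-group combination for this pass.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Connect with DH-HMAC-CHAP: key2 authenticates the host to the target,
# ckey2 lets the host verify the controller in return.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# A successful handshake leaves exactly one controller; verify, then tear down.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0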
00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.192 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.453 nvme0n1 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.453 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.713 nvme0n1 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.714 12:11:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.714 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.974 nvme0n1 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.974 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.235 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.497 nvme0n1 00:27:05.497 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.497 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.497 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.497 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.497 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:05.497 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 
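Every secret echoed in this trace is a DHHC-1 container of the form DHHC-1:<hash>:<base64>:, where, as I read the NVMe DH-HMAC-CHAP specification, the hash field (00, 01, 02, 03 above) records how the secret is transformed before use (none, SHA-256, SHA-384, SHA-512 respectively) and the base64 payload carries the secret followed by a 4-byte CRC-32 check value. A hypothetical helper, not part of auth.sh, that unpacks one of the keys seen above under that reading:

# Hypothetical inspector for the DHHC-1 layout; field meanings follow the
# NVMe DH-HMAC-CHAP spec as described above, nothing the test suite ships.
inspect_dhchap_key() {
    local key=$1 hash payload
    IFS=: read -r _ hash payload _ <<< "$key"
    # The decoded payload is the secret plus a trailing 4-byte CRC-32.
    local bytes=$(( $(printf %s "$payload" | base64 -d | wc -c) - 4 ))
    echo "hash id $hash, secret length $bytes bytes"
}

# The keyid=2 host secret from this run: hash id 01 (SHA-256), 32-byte secret.
inspect_dhchap_key 'DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:'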
00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.498 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.070 nvme0n1 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.070 12:11:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.070 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.071 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.644 nvme0n1 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.644 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.644 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.905 nvme0n1 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.905 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.167 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.168 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.428 nvme0n1 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.428 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.428 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.688 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.689 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:08.258 nvme0n1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.258 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.828 nvme0n1 00:27:08.828 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.828 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.828 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.828 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.829 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.829 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:09.091 
12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.091 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.664 nvme0n1 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.664 
12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.664 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.665 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.608 nvme0n1 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.608 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.181 nvme0n1 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.181 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.182 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.443 nvme0n1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.443 12:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 nvme0n1 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:11.706 12:11:48 
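The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58 throughout this section is what makes the keyid=4 attaches run without a --dhchap-ctrlr-key flag: when the ckeys slot is empty, the :+ expansion produces nothing and the array stays empty, so only unidirectional authentication is exercised. A self-contained illustration of that expansion (key values are placeholders):

    # Illustration of the ${var:+word} expansion used at host/auth.sh@58:
    # the array gets the flag pair only when a controller key exists.
    ckeys=([1]="DHHC-1:02:placeholder" [4]="")   # slot 4 deliberately empty
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[@]:-<no controller key flag>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <no controller key flag>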
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 nvme0n1 00:27:11.706 12:11:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.706 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.969 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.970 nvme0n1 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.970 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.232 nvme0n1 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.232 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.233 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.493 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.493 nvme0n1 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.493 
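
Each pass above repeats one fixed cycle per key: program the expected secret(s) into the target, restrict the host to a single digest/dhgroup pair, attach the controller, verify it shows up, then detach. Below is a minimal bash sketch of that cycle reconstructed from the xtrace; the helper names, RPC calls, and the ckey idiom are taken verbatim from the trace, while the nvmet configfs destinations are assumptions, since xtrace does not show redirections.

  # keys[]/ckeys[] hold the DHHC-1:... secret strings seen in the trace.
  nvmet_auth_set_key() {                 # host/auth.sh@42..51 in the trace
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      # Assumed configfs destinations -- the trace only shows the echoed values:
      echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"
      echo "$dhgroup"        > "${nvmet_host}/dhchap_dhgroup"
      echo "$key"            > "${nvmet_host}/dhchap_key"
      [[ -z $ckey ]] || echo "$ckey" > "${nvmet_host}/dhchap_ctrl_key"
  }

  connect_authenticate() {               # host/auth.sh@55..65 in the trace
      local digest=$1 dhgroup=$2 keyid=$3
      # Expands to "--dhchap-ctrlr-key ckeyN" only when a controller key exists:
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The ${ckeys[keyid]:+...} expansion is why the keyid=4 attaches above carry no --dhchap-ctrlr-key: the trace shows ckey= empty for that index, so the array expands to nothing.
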
12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.493 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.494 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.494 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.494 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.755 12:11:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 nvme0n1 00:27:12.755 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.756 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.018 nvme0n1 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.018 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.280 nvme0n1 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.280 
12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.280 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.541 12:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.541 nvme0n1 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.542 
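
The get_main_ns_ip fragments repeated above resolve which address the host dials: a transport-to-variable map, then an indirect expansion of the chosen variable name. A sketch consistent with the traced values follows; the name TEST_TRANSPORT and the ${!ip} indirection are assumptions, since xtrace only shows already-expanded values.

  get_main_ns_ip() {                     # nvmf/common.sh@767..781 in the trace
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # trace @773: [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # trace @774: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # trace @776: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # trace @781: echo 10.0.0.1
  }

With NVMF_INITIATOR_IP=10.0.0.1 and a tcp transport this prints 10.0.0.1, matching every "echo 10.0.0.1" line in the run.
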
12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.542 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.803 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.065 nvme0n1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.065 12:11:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.065 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.327 nvme0n1
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.327 12:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.589 nvme0n1
00:27:14.589 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.589 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.590 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.851 nvme0n1
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.851 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.113 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.114 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.377 nvme0n1
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=:
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]]
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=:
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:15.377 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.378 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.950 nvme0n1
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.950 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==:
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==:
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==:
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==:
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.951 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.212 nvme0n1
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.212 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.475 12:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.736 nvme0n1
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.736 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]]
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.737 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.997 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:16.997 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.997 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:16.998 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.259 nvme0n1
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:17.259 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.260 12:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.834 nvme0n1
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=:
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=:
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:17.834 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.408 nvme0n1
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.408 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==:
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==:
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==:
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==:
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.669 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.241 nvme0n1
00:27:19.241 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65:
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy:
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.242 12:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.187 nvme0n1
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==:
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]]
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs:
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.187 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.188 12:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.761 nvme0n1
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=:
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.761 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.762 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.335 nvme0n1
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.335 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:
00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- #
ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.597 12:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.597 nvme0n1 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:21.597 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.860 nvme0n1 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:21.860 
12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.860 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.122 nvme0n1 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.122 
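[Editor's note] The repeated ip_candidates block (nvmf/common.sh@767-781) is get_main_ns_ip resolving which address the host should dial. The xtrace lines show it mapping the transport to the NAME of an environment variable and then dereferencing it; with transport tcp that is NVMF_INITIATOR_IP, which expands to 10.0.0.1 throughout this run. A reconstruction from those lines (the TEST_TRANSPORT variable name and the early returns are assumptions; the trace only shows the expanded values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1             # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}             # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                      # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                    # indirect expansion -> 10.0.0.1
    }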
12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.122 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.387 nvme0n1 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.387 12:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.647 nvme0n1 00:27:22.647 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.647 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.647 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.647 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.648 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.908 nvme0n1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.909 
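[Editor's note] The host side of every round is the connect_authenticate sequence (host/auth.sh@55-65), whose RPCs appear verbatim in the trace. Condensed, with rpc_cmd being the harness wrapper around SPDK's rpc.py:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Verbatim from the trace: the ctrlr-key flag is added only when a
        # ckey was generated for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Connect with the key named keyN (and ckeyN when present).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Pass criterion: the controller shows up under the expected name...
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        # ...then it is torn down so the next round starts clean.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }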
12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.909 12:11:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.909 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.171 nvme0n1 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:23.171 12:11:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.171 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.172 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.172 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.172 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.433 nvme0n1 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.433 12:11:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.433 12:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.694 nvme0n1 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.694 
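[Editor's note] The markers host/auth.sh@100-102 in the trace show the loop that drives all of these rounds: every digest is paired with every DH group and every key id. This section of the log covers sha384/ffdhe8192 finishing and sha512 running through ffdhe2048, ffdhe3072 and (below) ffdhe4096. The digests, dhgroups and keys arrays come from the test script; the loop body is exactly the @103/@104 calls seen above:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done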
12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.694 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.695 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
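[Editor's note] On the DHHC-1 secrets echoed throughout this log: per the NVMe DH-HMAC-CHAP secret representation, the second colon-separated field encodes the HMAC used when the key was generated (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the third field is base64 of the secret concatenated with a 4-byte CRC-32. So key0 above (DHHC-1:00:...) should decode to 32 + 4 = 36 bytes; a quick check, using the key string copied from this trace:

    key='DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx:'
    IFS=: read -r version hmac secret _ <<< "$key"
    echo "$secret" | base64 -d | wc -c    # prints 36: 32-byte secret + CRC-32

The 02 and 03 keys in this run decode to 48 + 4 and 64 + 4 bytes respectively, consistent with the field meanings above.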
00:27:23.955 nvme0n1 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.955 12:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.955 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.214 nvme0n1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.214 12:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.214 12:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.214 12:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 nvme0n1 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.734 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.735 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 nvme0n1 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.995 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.996 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 nvme0n1 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.256 12:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.516 nvme0n1 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.516 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.776 12:12:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.776 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.036 nvme0n1 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.036 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.037 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.296 12:12:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.296 12:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 nvme0n1 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.555 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.556 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.815 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.815 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.815 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.077 nvme0n1 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.077 12:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.646 nvme0n1 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.646 12:12:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.217 nvme0n1 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE0ZjlhYTUxMGY0ZjVhMjMxODg4MmIxOTcwMDk5MTNeROwx: 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGYxZWRiZGY3OGRlMDM1YTY0MWQzODgxZmRjODFhNmJlMWFmZTViM2Q4MWYwNzBmMjcwNjY1ZWZhYWFiOWJhMApbt2w=: 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.217 12:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.786 nvme0n1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.786 12:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 nvme0n1 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.794 12:12:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:29.794 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.795 12:12:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.795 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.382 nvme0n1 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E2YWViNTk1YzQ3MWZmNjMyMDM4ODBkMTQ0NGU3NWI4ZTgwZjg4MzcxM2EwMDhh8H6ceA==: 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWQwMTNlYTIxMjUzZjYwNjY3NzAyZGE1ZTY5YWVlZjO289xs: 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.382 12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.382 
12:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.953 nvme0n1 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFhNDVkZWZmNmZlNGM4YzVlZmU0Y2EzODkwYWQ2MGRjYTMwZmUwOWEwMjc1N2JiYjI3MDZlNzM2MDJmNzFmOG3AVpY=: 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.953 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.215 12:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.785 nvme0n1 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.785 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.786 request: 00:27:31.786 { 00:27:31.786 "name": "nvme0", 00:27:31.786 "trtype": "tcp", 00:27:31.786 "traddr": "10.0.0.1", 00:27:31.786 "adrfam": "ipv4", 00:27:31.786 "trsvcid": "4420", 00:27:31.786 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:31.786 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:31.786 "prchk_reftag": false, 00:27:31.786 "prchk_guard": false, 00:27:31.786 "hdgst": false, 00:27:31.786 "ddgst": false, 00:27:31.786 "allow_unrecognized_csi": false, 00:27:31.786 "method": "bdev_nvme_attach_controller", 00:27:31.786 "req_id": 1 00:27:31.786 } 00:27:31.786 Got JSON-RPC error response 00:27:31.786 response: 00:27:31.786 { 00:27:31.786 "code": -5, 00:27:31.786 "message": "Input/output error" 00:27:31.786 } 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:31.786 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.049 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.050 request: 00:27:32.050 { 00:27:32.050 "name": "nvme0", 00:27:32.050 "trtype": "tcp", 00:27:32.050 "traddr": "10.0.0.1", 00:27:32.050 "adrfam": "ipv4", 00:27:32.050 "trsvcid": "4420", 00:27:32.050 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:32.050 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:32.050 "prchk_reftag": false, 00:27:32.050 "prchk_guard": false, 00:27:32.050 "hdgst": false, 00:27:32.050 "ddgst": false, 00:27:32.050 "dhchap_key": "key2", 00:27:32.050 "allow_unrecognized_csi": false, 00:27:32.050 "method": "bdev_nvme_attach_controller", 00:27:32.050 "req_id": 1 00:27:32.050 } 00:27:32.050 Got JSON-RPC error response 00:27:32.050 response: 00:27:32.050 { 00:27:32.050 "code": -5, 00:27:32.050 "message": "Input/output error" 00:27:32.050 } 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.050 request: 00:27:32.050 { 00:27:32.050 "name": "nvme0", 00:27:32.050 "trtype": "tcp", 00:27:32.050 "traddr": "10.0.0.1", 00:27:32.050 "adrfam": "ipv4", 00:27:32.050 "trsvcid": "4420", 00:27:32.050 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:32.050 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:32.050 "prchk_reftag": false, 00:27:32.050 "prchk_guard": false, 00:27:32.050 "hdgst": false, 00:27:32.050 "ddgst": false, 00:27:32.050 "dhchap_key": "key1", 00:27:32.050 "dhchap_ctrlr_key": "ckey2", 00:27:32.050 "allow_unrecognized_csi": false, 00:27:32.050 "method": "bdev_nvme_attach_controller", 00:27:32.050 "req_id": 1 00:27:32.050 } 00:27:32.050 Got JSON-RPC error response 00:27:32.050 response: 00:27:32.050 { 00:27:32.050 "code": -5, 00:27:32.050 "message": "Input/output 
error" 00:27:32.050 } 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.050 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.051 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.051 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:32.051 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.051 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.316 nvme0n1 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.316 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.577 request: 00:27:32.577 { 00:27:32.577 "name": "nvme0", 00:27:32.577 "dhchap_key": "key1", 00:27:32.577 "dhchap_ctrlr_key": "ckey2", 00:27:32.577 "method": "bdev_nvme_set_keys", 00:27:32.577 "req_id": 1 00:27:32.577 } 00:27:32.577 Got JSON-RPC error response 00:27:32.577 response: 00:27:32.577 { 00:27:32.577 "code": -13, 00:27:32.577 "message": "Permission denied" 00:27:32.577 } 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:32.577 12:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:33.519 12:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.519 12:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:33.519 12:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.519 12:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.519 12:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.519 12:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:33.519 12:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:34.461 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.461 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:34.461 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.461 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.461 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.722 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:34.722 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.722 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.722 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.722 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGUwNDAwOTMwMzEyZWRhOGQwY2QwMDYxNTI3OWE0NTIxOGFlOTlkNTM4MzUxYzkwwApCnA==: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzJlNzVkZGI3ZDkyMDI0NjcxM2U0NTQ5NDFhN2EyMzg2NzRkYjI5NTYyMTAyNmE2bF/oXA==: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.723 nvme0n1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1MWZiNDVlNWU5MDc4ZDMyNTNlYTY1N2U5M2UxYmLEK/65: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgyZGY1NzRkZTA5NzU4ZjFiMDM2YzdmNzJkODgyNGEYTdTy: 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.723 request: 00:27:34.723 { 00:27:34.723 "name": "nvme0", 00:27:34.723 "dhchap_key": "key2", 00:27:34.723 "dhchap_ctrlr_key": "ckey1", 00:27:34.723 "method": "bdev_nvme_set_keys", 00:27:34.723 "req_id": 1 00:27:34.723 } 00:27:34.723 Got JSON-RPC error response 00:27:34.723 response: 00:27:34.723 { 00:27:34.723 "code": -13, 00:27:34.723 "message": "Permission denied" 00:27:34.723 } 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.723 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.984 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.984 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:34.984 12:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:35.926 12:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:35.926 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.927 rmmod nvme_tcp 00:27:35.927 rmmod nvme_fabrics 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1125173 ']' 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1125173 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1125173 ']' 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1125173 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.927 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125173 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125173' 00:27:36.188 killing process with pid 1125173 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1125173 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1125173 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:36.188 12:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:38.733 12:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:42.033 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.033 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.033 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.034 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:42.294 12:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HGs /tmp/spdk.key-null.SrI /tmp/spdk.key-sha256.EUj /tmp/spdk.key-sha384.pwN /tmp/spdk.key-sha512.Gn1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:42.294 12:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:46.500 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
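
The configfs teardown traced above must run in a strict order: host ACL links first, then the port-to-subsystem link, then the directories from the inside out, and the kernel modules only once configfs is empty. The same sequence as a standalone sketch; the namespace enable attribute is an assumption, since the trace shows only a bare echo 0:

    nqn=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet
    rm "$cfg/subsystems/$nqn/allowed_hosts/$host"        # drop the host ACL link
    rmdir "$cfg/hosts/$host"
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink the port from the subsystem
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                          # removable only once configfs is empty
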
00:27:46.500 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:46.500 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.500 00:27:46.500 real 1m0.899s 00:27:46.500 user 0m54.696s 00:27:46.500 sys 0m16.049s 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.500 ************************************ 00:27:46.500 END TEST nvmf_auth_host 00:27:46.500 ************************************ 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.500 ************************************ 00:27:46.500 START TEST nvmf_digest 00:27:46.500 ************************************ 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.500 * Looking for test storage... 
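
The asterisk banners and the real/user/sys block above come from run_test, which brackets each suite with START/END markers, times the body, and passes the exit status through. A simplified sketch of that wrapper (the autotest_common.sh original also manages xtrace state around the body):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # emits the real/user/sys summary seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
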
00:27:46.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:46.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.500 --rc genhtml_branch_coverage=1 00:27:46.500 --rc genhtml_function_coverage=1 00:27:46.500 --rc genhtml_legend=1 00:27:46.500 --rc geninfo_all_blocks=1 00:27:46.500 --rc geninfo_unexecuted_blocks=1 00:27:46.500 00:27:46.500 ' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:46.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.500 --rc genhtml_branch_coverage=1 00:27:46.500 --rc genhtml_function_coverage=1 00:27:46.500 --rc genhtml_legend=1 00:27:46.500 --rc geninfo_all_blocks=1 00:27:46.500 --rc geninfo_unexecuted_blocks=1 00:27:46.500 00:27:46.500 ' 00:27:46.500 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:46.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.500 --rc genhtml_branch_coverage=1 00:27:46.501 --rc genhtml_function_coverage=1 00:27:46.501 --rc genhtml_legend=1 00:27:46.501 --rc geninfo_all_blocks=1 00:27:46.501 --rc geninfo_unexecuted_blocks=1 00:27:46.501 00:27:46.501 ' 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:46.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.501 --rc genhtml_branch_coverage=1 00:27:46.501 --rc genhtml_function_coverage=1 00:27:46.501 --rc genhtml_legend=1 00:27:46.501 --rc geninfo_all_blocks=1 00:27:46.501 --rc geninfo_unexecuted_blocks=1 00:27:46.501 00:27:46.501 ' 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.501 
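
The lt 1.15 2 trace above walks scripts/common.sh's cmp_versions: split both strings on '.', '-' and ':', then compare numerically component by component, padding the shorter array with zeros. A compact sketch of the same logic (the real helper additionally normalizes each component through its decimal check first):

    cmp_versions() {
        local IFS='.-:'
        local op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == *'>'* ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == *'<'* ]]; return; fi
        done
        [[ $op == *'='* ]]                     # all equal: only <=, >= and == succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }       # lt 1.15 2 -> true, as traced above
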
12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.501 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:46.501 12:12:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:46.501 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.646 
12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:54.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:54.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.646 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:54.647 Found net devices under 0000:4b:00.0: cvl_0_0 
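
Both e810 ports are resolved to kernel interfaces the same way: glob the net/ directory under the PCI function in sysfs and strip the path components. A minimal sketch of the lookup for the first port found above:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
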
00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:54.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:27:54.647 00:27:54.647 --- 10.0.0.2 ping statistics --- 00:27:54.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.647 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:27:54.647 00:27:54.647 --- 10.0.0.1 ping statistics --- 00:27:54.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.647 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.647 ************************************ 00:27:54.647 START TEST nvmf_digest_clean 00:27:54.647 ************************************ 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
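
nvmf_tcp_init, traced above, is what lets a single two-port host exercise a real TCP path: the target port moves into its own network namespace, each side gets a 10.0.0.x address, one iptables rule admits the NVMe/TCP port, and both directions are ping-verified. The same sequence collected into one sketch, with the interface names as on this rig:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
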
host/digest.sh@120 -- # local dsa_initiator 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1142763 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1142763 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1142763 ']' 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.647 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.647 [2024-10-21 12:12:30.623017] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:27:54.647 [2024-10-21 12:12:30.623080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.647 [2024-10-21 12:12:30.712021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.647 [2024-10-21 12:12:30.763786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.647 [2024-10-21 12:12:30.763834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.647 [2024-10-21 12:12:30.763843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.647 [2024-10-21 12:12:30.763850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.647 [2024-10-21 12:12:30.763856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
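
nvmfappstart backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten until the JSON-RPC socket answers, with the max_retries=100 budget visible in the trace. A simplified sketch of that wait; the real helper also handles TCP rpc addresses and sudo-wrapped pids:

    # assumes spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" || return 1      # app died during startup
            "$spdk"/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1                            # never came up within the retry budget
    }
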
00:27:54.647 [2024-10-21 12:12:30.764619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.909 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.171 null0 00:27:55.171 [2024-10-21 12:12:31.594553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.171 [2024-10-21 12:12:31.618880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1143018 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1143018 /var/tmp/bperf.sock 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1143018 ']' 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
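
Between the reactor start and the "Listening on 10.0.0.2 port 4420" notice, common_target_config builds the target over RPC: a null0 bdev, a TCP transport (matching the NVMF_TRANSPORT_OPTS='-t tcp -o' seen earlier), and a subsystem with a listener. The RPC names below are standard SPDK; the sizes and subsystem arguments are assumptions, since the trace shows only the resulting notices:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # run via 'ip netns exec' in the real harness
    "$rpc" bdev_null_create null0 1000 512                                # illustrative size / block size
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
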
common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.171 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.171 [2024-10-21 12:12:31.679781] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:27:55.171 [2024-10-21 12:12:31.679845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143018 ] 00:27:55.171 [2024-10-21 12:12:31.762932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.432 [2024-10-21 12:12:31.815071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.006 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.006 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:56.006 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:56.006 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:56.006 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.269 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.269 12:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.531 nvme0n1 00:27:56.531 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.531 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.531 Running I/O for 2 seconds... 
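
run_bperf's client side is fully visible above: bdevperf starts idle (-z plus --wait-for-rpc) on its own RPC socket, the framework is released, the controller is attached, and perform_tests is fired over the same socket. Collected into one sketch, with every command taken from the trace:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    # ... waitforlisten "$bperfpid" /var/tmp/bperf.sock ...
    "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
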
00:27:58.861 18680.00 IOPS, 72.97 MiB/s [2024-10-21T10:12:35.456Z] 18795.00 IOPS, 73.42 MiB/s 00:27:58.861 Latency(us) 00:27:58.861 [2024-10-21T10:12:35.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.861 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:58.861 nvme0n1 : 2.01 18814.39 73.49 0.00 0.00 6793.51 2853.55 23592.96 00:27:58.861 [2024-10-21T10:12:35.456Z] =================================================================================================================== 00:27:58.861 [2024-10-21T10:12:35.456Z] Total : 18814.39 73.49 0.00 0.00 6793.51 2853.55 23592.96 00:27:58.861 { 00:27:58.861 "results": [ 00:27:58.861 { 00:27:58.861 "job": "nvme0n1", 00:27:58.861 "core_mask": "0x2", 00:27:58.861 "workload": "randread", 00:27:58.861 "status": "finished", 00:27:58.861 "queue_depth": 128, 00:27:58.861 "io_size": 4096, 00:27:58.861 "runtime": 2.006549, 00:27:58.861 "iops": 18814.392272503686, 00:27:58.861 "mibps": 73.49371981446752, 00:27:58.861 "io_failed": 0, 00:27:58.861 "io_timeout": 0, 00:27:58.861 "avg_latency_us": 6793.509992229991, 00:27:58.861 "min_latency_us": 2853.5466666666666, 00:27:58.861 "max_latency_us": 23592.96 00:27:58.861 } 00:27:58.861 ], 00:27:58.861 "core_count": 1 00:27:58.861 } 00:27:58.861 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.861 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.862 | select(.opcode=="crc32c") 00:27:58.862 | "\(.module_name) \(.executed)"' 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1143018 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1143018 ']' 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1143018 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143018 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143018' 00:27:58.862 killing process with pid 1143018 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1143018 00:27:58.862 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.862 00:27:58.862 Latency(us) 00:27:58.862 [2024-10-21T10:12:35.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.862 [2024-10-21T10:12:35.457Z] =================================================================================================================== 00:27:58.862 [2024-10-21T10:12:35.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.862 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1143018 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1143699 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1143699 /var/tmp/bperf.sock 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1143699 ']' 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.123 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.124 12:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.124 [2024-10-21 12:12:35.547355] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
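
perform_tests prints both the human-readable table and a JSON document; the JSON is the stable interface if you want to script against these runs. A sketch of pulling the headline numbers out of the first run's output above, assuming it was captured to a file named bperf_results.json:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' bperf_results.json
    # -> nvme0n1: 18814.392272503686 IOPS, avg 6793.509992229991 us
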
00:27:59.124 [2024-10-21 12:12:35.547413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143699 ] 00:27:59.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:59.124 Zero copy mechanism will not be used. 00:27:59.124 [2024-10-21 12:12:35.622110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.124 [2024-10-21 12:12:35.651942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.068 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.328 nvme0n1 00:28:00.328 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.328 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:00.328 Zero copy mechanism will not be used. 00:28:00.328 Running I/O for 2 seconds... 
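
Each run is judged by the accel framework rather than by throughput: digest.sh reads accel_get_stats from the bperf socket, filters for crc32c, and requires a positive count from the expected module (software here, since DSA is off: scan_dsa=false). The check, condensed from the trace:

    # assumes spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software                 # presumably 'dsa' when a DSA initiator is requested
    (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]
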
00:28:02.659 3829.00 IOPS, 478.62 MiB/s [2024-10-21T10:12:39.254Z] 3431.00 IOPS, 428.88 MiB/s 00:28:02.659 Latency(us) 00:28:02.659 [2024-10-21T10:12:39.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:02.659 nvme0n1 : 2.01 3433.73 429.22 0.00 0.00 4655.26 812.37 13707.95 00:28:02.659 [2024-10-21T10:12:39.254Z] =================================================================================================================== 00:28:02.659 [2024-10-21T10:12:39.254Z] Total : 3433.73 429.22 0.00 0.00 4655.26 812.37 13707.95 00:28:02.659 { 00:28:02.659 "results": [ 00:28:02.659 { 00:28:02.659 "job": "nvme0n1", 00:28:02.659 "core_mask": "0x2", 00:28:02.659 "workload": "randread", 00:28:02.659 "status": "finished", 00:28:02.659 "queue_depth": 16, 00:28:02.659 "io_size": 131072, 00:28:02.659 "runtime": 2.005983, 00:28:02.659 "iops": 3433.7280026799826, 00:28:02.659 "mibps": 429.21600033499783, 00:28:02.659 "io_failed": 0, 00:28:02.659 "io_timeout": 0, 00:28:02.659 "avg_latency_us": 4655.264854819977, 00:28:02.659 "min_latency_us": 812.3733333333333, 00:28:02.659 "max_latency_us": 13707.946666666667 00:28:02.659 } 00:28:02.659 ], 00:28:02.659 "core_count": 1 00:28:02.659 } 00:28:02.659 12:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.659 12:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.659 12:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.659 12:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.659 | select(.opcode=="crc32c") 00:28:02.659 | "\(.module_name) \(.executed)"' 00:28:02.659 12:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1143699 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1143699 ']' 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1143699 00:28:02.659 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143699 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143699' 00:28:02.660 killing process with pid 1143699 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1143699 00:28:02.660 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.660 00:28:02.660 Latency(us) 00:28:02.660 [2024-10-21T10:12:39.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.660 [2024-10-21T10:12:39.255Z] =================================================================================================================== 00:28:02.660 [2024-10-21T10:12:39.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.660 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1143699 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1144390 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1144390 /var/tmp/bperf.sock 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1144390 ']' 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.920 [2024-10-21 12:12:39.342144] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
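
A quick way to sanity-check the MiB/s column in these tables is IOPS times I/O size over one MiB; for the 128 KiB randread run above:

    echo 'scale=4; 3433.73 * 131072 / 1048576' | bc   # -> 429.2162, i.e. the reported 429.22 MiB/s
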
00:28:02.920 [2024-10-21 12:12:39.342200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144390 ] 00:28:02.920 [2024-10-21 12:12:39.417231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.920 [2024-10-21 12:12:39.446402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.920 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.180 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.180 12:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.750 nvme0n1 00:28:03.750 12:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.750 12:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.750 Running I/O for 2 seconds... 
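Before the numbers below appear, the trace has brought up one bperf instance end to end. bdevperf is started suspended (--wait-for-rpc, resumed by framework_start_init) and with -z, which makes it wait for a perform_tests RPC instead of running immediately. A sketch of that bring-up, commands as traced (the real script also waits for the RPC socket via waitforlisten before issuing RPCs):

# Bring-up of one bperf instance (randwrite, 4 KiB, queue depth 128).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests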
00:28:05.631 30169.00 IOPS, 117.85 MiB/s [2024-10-21T10:12:42.226Z] 30292.00 IOPS, 118.33 MiB/s 00:28:05.631 Latency(us) 00:28:05.631 [2024-10-21T10:12:42.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.631 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.631 nvme0n1 : 2.00 30309.22 118.40 0.00 0.00 4217.93 2157.23 13653.33 00:28:05.631 [2024-10-21T10:12:42.226Z] =================================================================================================================== 00:28:05.631 [2024-10-21T10:12:42.226Z] Total : 30309.22 118.40 0.00 0.00 4217.93 2157.23 13653.33 00:28:05.631 { 00:28:05.631 "results": [ 00:28:05.631 { 00:28:05.631 "job": "nvme0n1", 00:28:05.631 "core_mask": "0x2", 00:28:05.631 "workload": "randwrite", 00:28:05.631 "status": "finished", 00:28:05.631 "queue_depth": 128, 00:28:05.631 "io_size": 4096, 00:28:05.631 "runtime": 2.003087, 00:28:05.631 "iops": 30309.217722445406, 00:28:05.631 "mibps": 118.39538172830237, 00:28:05.631 "io_failed": 0, 00:28:05.631 "io_timeout": 0, 00:28:05.631 "avg_latency_us": 4217.93041507445, 00:28:05.631 "min_latency_us": 2157.2266666666665, 00:28:05.631 "max_latency_us": 13653.333333333334 00:28:05.631 } 00:28:05.631 ], 00:28:05.631 "core_count": 1 00:28:05.631 } 00:28:05.631 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.631 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.631 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.631 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.631 | select(.opcode=="crc32c") 00:28:05.631 | "\(.module_name) \(.executed)"' 00:28:05.631 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1144390 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1144390 ']' 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1144390 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1144390 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1144390' 00:28:05.891 killing process with pid 1144390 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1144390 00:28:05.891 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.891 00:28:05.891 Latency(us) 00:28:05.891 [2024-10-21T10:12:42.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.891 [2024-10-21T10:12:42.486Z] =================================================================================================================== 00:28:05.891 [2024-10-21T10:12:42.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.891 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1144390 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1145060 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1145060 /var/tmp/bperf.sock 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1145060 ']' 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.152 [2024-10-21 12:12:42.609676] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
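The derived fields in the results JSON above are self-consistent: MiB/s is just IOPS scaled by the I/O size, 30309.22 × 4096 / 1048576 ≈ 118.40, matching the reported mibps. A one-liner to recompute it (results.json is a hypothetical file holding the JSON block above):

# Recompute MiB/s from iops and io_size; should match the "mibps" field.
jq '.results[0] | .iops * .io_size / 1048576' results.json
# -> 118.3953817... for iops=30309.2177, io_size=4096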
00:28:06.152 [2024-10-21 12:12:42.609735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145060 ] 00:28:06.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.152 Zero copy mechanism will not be used. 00:28:06.152 [2024-10-21 12:12:42.683052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.152 [2024-10-21 12:12:42.712515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.152 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.414 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.414 12:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.987 nvme0n1 00:28:06.987 12:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.987 12:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.987 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.987 Zero copy mechanism will not be used. 00:28:06.987 Running I/O for 2 seconds... 
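The clean-path runs traced in this section differ only in the (rw, bs, qd) tuple handed to run_bperf: randread 131072/16, randwrite 4096/128, and now randwrite 131072/16, all with scan_dsa=false. The 128 KiB cases additionally log that the 65536-byte zero-copy threshold is exceeded, so the TCP transport copies instead. A hedged sketch of that sweep (a loop, not the script's literal control flow):

# Parameter sweep over the clean-path cases seen in this section.
for args in "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
    run_bperf $args false   # unquoted on purpose; false => scan_dsa off
done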
00:28:08.874 4502.00 IOPS, 562.75 MiB/s [2024-10-21T10:12:45.469Z] 5050.50 IOPS, 631.31 MiB/s 00:28:08.874 Latency(us) 00:28:08.874 [2024-10-21T10:12:45.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.874 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:08.874 nvme0n1 : 2.00 5049.73 631.22 0.00 0.00 3164.03 1228.80 9448.11 00:28:08.874 [2024-10-21T10:12:45.469Z] =================================================================================================================== 00:28:08.874 [2024-10-21T10:12:45.469Z] Total : 5049.73 631.22 0.00 0.00 3164.03 1228.80 9448.11 00:28:08.874 { 00:28:08.874 "results": [ 00:28:08.874 { 00:28:08.874 "job": "nvme0n1", 00:28:08.874 "core_mask": "0x2", 00:28:08.874 "workload": "randwrite", 00:28:08.874 "status": "finished", 00:28:08.874 "queue_depth": 16, 00:28:08.874 "io_size": 131072, 00:28:08.874 "runtime": 2.003474, 00:28:08.874 "iops": 5049.728621384655, 00:28:08.874 "mibps": 631.2160776730818, 00:28:08.874 "io_failed": 0, 00:28:08.874 "io_timeout": 0, 00:28:08.874 "avg_latency_us": 3164.0338176666337, 00:28:08.874 "min_latency_us": 1228.8, 00:28:08.874 "max_latency_us": 9448.106666666667 00:28:08.874 } 00:28:08.874 ], 00:28:08.874 "core_count": 1 00:28:08.874 } 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.135 | select(.opcode=="crc32c") 00:28:09.135 | "\(.module_name) \(.executed)"' 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1145060 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1145060 ']' 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1145060 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.135 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1145060 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1145060' 00:28:09.397 killing process with pid 1145060 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1145060 00:28:09.397 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.397 00:28:09.397 Latency(us) 00:28:09.397 [2024-10-21T10:12:45.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.397 [2024-10-21T10:12:45.992Z] =================================================================================================================== 00:28:09.397 [2024-10-21T10:12:45.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1145060 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1142763 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1142763 ']' 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1142763 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1142763 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1142763' 00:28:09.397 killing process with pid 1142763 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1142763 00:28:09.397 12:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1142763 00:28:09.658 00:28:09.658 real 0m15.445s 00:28:09.658 user 0m30.240s 00:28:09.658 sys 0m3.705s 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.658 ************************************ 00:28:09.658 END TEST nvmf_digest_clean 00:28:09.658 ************************************ 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.658 ************************************ 00:28:09.658 START TEST nvmf_digest_error 00:28:09.658 ************************************ 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1145766 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1145766 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1145766 ']' 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.658 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.658 [2024-10-21 12:12:46.141897] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:28:09.658 [2024-10-21 12:12:46.141945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.658 [2024-10-21 12:12:46.224833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.919 [2024-10-21 12:12:46.257386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.919 [2024-10-21 12:12:46.257421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.919 [2024-10-21 12:12:46.257426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.919 [2024-10-21 12:12:46.257431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.919 [2024-10-21 12:12:46.257435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
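nvmf_digest_error starts its own target rather than reusing a bperf socket: nvmf_tgt is launched suspended (--wait-for-rpc) inside the cvl_0_0_ns_spdk network namespace so that crc32c can be rerouted to the error accel module before the framework initializes. The launch command as it appears in the trace:

# Target launch for the error-path tests (nvmfpid is 1145766 in this run).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!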
00:28:09.919 [2024-10-21 12:12:46.257903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.491 [2024-10-21 12:12:46.987910] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.491 12:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.491 null0 00:28:10.491 [2024-10-21 12:12:47.065318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.752 [2024-10-21 12:12:47.089538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1146114 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1146114 /var/tmp/bperf.sock 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1146114 ']' 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
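On the target side the error path is prepared in the steps traced above: accel_assign_opc routes every crc32c operation through the error accel module (which, as I understand it, passes data through unchanged until an injection is armed), and a null0 bdev plus a TCP listener on 10.0.0.2:4420 give the host something to read. The key RPC (rpc_cmd is the autotest wrapper around scripts/rpc.py):

# Target-side setup for the digest error tests.
rpc_cmd accel_assign_opc -o crc32c -m error   # route crc32c through the error module
# null0 bdev and the 10.0.0.2:4420 TCP listener follow, per the notices above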
00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.752 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.753 [2024-10-21 12:12:47.155966] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:28:10.753 [2024-10-21 12:12:47.156030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146114 ] 00:28:10.753 [2024-10-21 12:12:47.232134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.753 [2024-10-21 12:12:47.261958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.694 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.694 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:11.694 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:11.694 12:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:11.694 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:11.694 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.695 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.695 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.695 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.695 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.956 nvme0n1 00:28:11.956 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:11.956 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.956 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
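The host side then arms the failure: bdev retries are made infinite so injected digest errors never surface to the job, data digest is enabled on the controller, and 256 crc32c corruptions are injected. Each "(00/22)" completion below is one injected corruption detected by the host as a data digest error, completed as a transient transport error, and retried (dnr:0). Sketch of the arming sequence, commands in the order traced (bperf_rpc/bperf_py/rpc_cmd are the wrappers from host/digest.sh and autotest_common.sh):

# Host-side arming of the data-digest error path.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable          # start from a clean slate
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c ops
bperf_py perform_tests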
00:28:11.956 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.956 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:11.957 12:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.957 Running I/O for 2 seconds... 00:28:11.957 [2024-10-21 12:12:48.543780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:11.957 [2024-10-21 12:12:48.543811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.957 [2024-10-21 12:12:48.543821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.552985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.553005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.553013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.565041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.565060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.565068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.574477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.574497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.574504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.582989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.583008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.583015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.592190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.592208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.592214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.601291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.601309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.601316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.610967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.610984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.610991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.619253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.619271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.619278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.628145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.628164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.628170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.636565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.636583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.636590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.646258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.646276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.646282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.654519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.654536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.654543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.662369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.662387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.671848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.671866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.671879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.681751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.681769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.681775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.689482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.689501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.689507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.699436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.699453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.219 [2024-10-21 12:12:48.699460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.219 [2024-10-21 12:12:48.709070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.219 [2024-10-21 12:12:48.709088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.709095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.718927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.718944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.718951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.726562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.726580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.726587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.735539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.735557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.735564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.744257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.744274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.744281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.753993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.754016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.754022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.762825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.762843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.762850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.771500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.771518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.771524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.779773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.779791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 
[2024-10-21 12:12:48.779797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.789097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.789114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.789121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.798283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.798300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.798306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.220 [2024-10-21 12:12:48.806325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.220 [2024-10-21 12:12:48.806342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.220 [2024-10-21 12:12:48.806349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.815377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.824192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.824210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.824217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.833580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.833597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.833604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.842450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.842468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12595 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.842474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.850638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.850656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.850663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.859201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.859218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.859224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.868626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.868644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.868650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.877369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.877386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.877392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.886277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.886294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.886301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.894610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.894627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.482 [2024-10-21 12:12:48.894634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.482 [2024-10-21 12:12:48.903520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:12.482 [2024-10-21 12:12:48.903538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:115 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.482 [2024-10-21 12:12:48.903548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:12.482 [2024-10-21 12:12:48.912754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:12.482 [2024-10-21 12:12:48.912772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.482 [2024-10-21 12:12:48.912778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern (a data digest error on tqpair=(0x97d980); the failed READ, always qid:1 nsid:1 len:1, with varying cid and lba; its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at roughly 10 ms intervals from 12:12:48.921 through 12:12:49.207 ...]
00:28:12.745 [2024-10-21 12:12:49.216912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:12.745 [2024-10-21 12:12:49.216930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.745 [2024-10-21 12:12:49.216936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
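The triplets above are regular enough to tally mechanically. A minimal post-processing sketch, not part of the test run itself: the helper name is hypothetical, and the regex simply mirrors the nvme_io_qpair_print_command format printed above, so it works on the raw text whether or not the entries are broken one per line.

```python
import re
from collections import Counter

# Matches the SPDK command-print lines seen in this log, e.g.
#   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17816 len:1
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: READ "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def summarize_failed_reads(log_text: str) -> None:
    """Tally the READs that this burst reported alongside digest errors."""
    cids = Counter()
    lbas = []
    for m in CMD_RE.finditer(log_text):
        cids[m.group("cid")] += 1
        lbas.append(int(m.group("lba")))
    print(f"failed READs : {len(lbas)}")
    if lbas:
        print(f"lba range    : {min(lbas)}..{max(lbas)}")
        print(f"busiest cids : {cids.most_common(3)}")

# Usage: summarize_failed_reads(open("build.log").read())
```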
00:28:12.745 [2024-10-21 12:12:49.226024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:12.745 [2024-10-21 12:12:49.226041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.745 [2024-10-21 12:12:49.226048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same pattern from 12:12:49.234 through 12:12:49.518: every READ on qid:1 fails its data digest check on tqpair=(0x97d980) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0; only cid, lba, and timestamps vary ...]
00:28:13.008 27676.00 IOPS, 108.11 MiB/s [2024-10-21T10:12:49.603Z] [2024-10-21 12:12:49.529686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:13.008 [2024-10-21 12:12:49.529702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.008 [2024-10-21 12:12:49.529708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the burst continues uninterrupted from 12:12:49.539 through 12:12:49.833, with the elapsed-time prefix advancing from 00:28:13.008 to 00:28:13.270; the digest-failure / failed-READ / (00/22)-completion triplet repeats with only cid, lba, and timestamps changing ...]
00:28:13.270 [2024-10-21 12:12:49.841703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:13.270 [2024-10-21 12:12:49.841720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.270 [2024-10-21 12:12:49.841726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
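Every *ERROR* line in this run comes from the same check: NVMe/TCP can carry an optional CRC-32C data digest (DDGST) after each data PDU, and judging by its name, nvme_tcp_accel_seq_recv_compute_crc32_done is the completion callback that runs after SPDK's accel framework has recomputed that CRC over the received payload. A minimal sketch of the checksum itself, in bitwise reference form with the standard reflected Castagnoli parameters, not SPDK's accelerated implementation:

```python
def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    """Reference CRC-32C: reflected poly 0x82F63B78, init/xorout 0xFFFFFFFF."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Well-known CRC-32C check value over the ASCII digits "123456789".
assert crc32c(b"123456789") == 0xE3069283
```

A mismatch between the value recomputed this way and the DDGST field of the incoming data PDU is exactly what the log reports as "data digest error", and the command is then completed with a transient transport status rather than data corruption being passed up silently.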
00:28:13.270 [2024-10-21 12:12:49.850815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:13.270 [2024-10-21 12:12:49.850832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.270 [2024-10-21 12:12:49.850838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same pattern from 12:12:49.859 through 12:12:50.145, elapsed-time prefix advancing through 00:28:13.533 to 00:28:13.796, every failure on qid:1 of tqpair=(0x97d980) ...]
00:28:13.796 [2024-10-21 12:12:50.154381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980)
00:28:13.796 [2024-10-21 12:12:50.154398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:68 nsid:1 lba:14931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.154409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.163922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.163940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.163947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.172219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.172237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.172244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.181194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.181213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.181219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.190965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.190983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.190990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.198560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.198577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.796 [2024-10-21 12:12:50.198584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.796 [2024-10-21 12:12:50.208305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.796 [2024-10-21 12:12:50.208326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.208334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.216603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.216623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.216629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.226119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.226137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.226144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.234812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.234833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.234840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.244706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.244724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.244730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.254387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.254404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.254410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.262873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.262889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.262896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.271261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.271279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.271285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.279941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 
[2024-10-21 12:12:50.279959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.279965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.289246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.289263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.289269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.297773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.297791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.297798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.306136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.306154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.306161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.316582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.316599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.316605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.326173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.326190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.326197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.335947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.335965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.335971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.347429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.347446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.347453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.355613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.355630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.355637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.364948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.364966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.364972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.376637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.376654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.376660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.797 [2024-10-21 12:12:50.385358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:13.797 [2024-10-21 12:12:50.385376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.797 [2024-10-21 12:12:50.385383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.393847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.393869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.393875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.402934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.402950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.402956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.411545] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.411563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.411570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.420504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.420522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.420528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.429810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.429828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.429834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.438052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.438070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.438076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.448087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.448105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.448111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.455610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.455627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.455634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.466487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.466505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.466512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:14.059 [2024-10-21 12:12:50.475275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.475296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.475305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.484371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.484396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.493269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.493287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.493294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.501664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.501681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.501690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.511450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.511468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.511474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 [2024-10-21 12:12:50.520384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.520401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.520407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.059 28025.50 IOPS, 109.47 MiB/s [2024-10-21T10:12:50.654Z] [2024-10-21 12:12:50.530709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x97d980) 00:28:14.059 [2024-10-21 12:12:50.530726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.059 [2024-10-21 12:12:50.530732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.059
00:28:14.059 Latency(us)
00:28:14.059 [2024-10-21T10:12:50.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.060 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:14.060 nvme0n1 : 2.04 27478.66 107.34 0.00 0.00 4560.83 2416.64 47404.37
00:28:14.060 [2024-10-21T10:12:50.655Z] ===================================================================================================================
00:28:14.060 [2024-10-21T10:12:50.655Z] Total : 27478.66 107.34 0.00 0.00 4560.83 2416.64 47404.37
00:28:14.060 {
00:28:14.060   "results": [
00:28:14.060     {
00:28:14.060       "job": "nvme0n1",
00:28:14.060       "core_mask": "0x2",
00:28:14.060       "workload": "randread",
00:28:14.060       "status": "finished",
00:28:14.060       "queue_depth": 128,
00:28:14.060       "io_size": 4096,
00:28:14.060       "runtime": 2.044459,
00:28:14.060       "iops": 27478.663059518436,
00:28:14.060       "mibps": 107.33852757624389,
00:28:14.060       "io_failed": 0,
00:28:14.060       "io_timeout": 0,
00:28:14.060       "avg_latency_us": 4560.832094554905,
00:28:14.060       "min_latency_us": 2416.64,
00:28:14.060       "max_latency_us": 47404.37333333334
00:28:14.060     }
00:28:14.060   ],
00:28:14.060   "core_count": 1
00:28:14.060 }
00:28:14.060 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:14.060 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:14.060 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:14.060 | .driver_specific
00:28:14.060 | .nvme_error
00:28:14.060 | .status_code
00:28:14.060 | .command_transient_transport_error'
00:28:14.060 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1146114
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1146114 ']'
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1146114
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146114
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146114'
00:28:14.321 killing process with pid 1146114
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1146114
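For reference, the get_transient_errcount helper traced above reduces to a single bdev_get_iostat RPC piped through jq, and the (( 220 > 0 )) check is that count (220 in this run) being asserted non-zero, i.e. the injected digest corruption really did surface as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions. A minimal standalone sketch, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock as in this run:

# Sketch only: mirrors the get_transient_errcount step in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The nvme_error counters only show up in the iostat output because bdev_nvme_set_options was called with --nvme-error-stat earlier in the test.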
00:28:14.321 Received shutdown signal, test time was about 2.000000 seconds
00:28:14.321
00:28:14.321 Latency(us)
00:28:14.321 [2024-10-21T10:12:50.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.321 [2024-10-21T10:12:50.916Z] ===================================================================================================================
00:28:14.321 [2024-10-21T10:12:50.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:14.321 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1146114
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1146796
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1146796 /var/tmp/bperf.sock
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1146796 ']'
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:14.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:14.582 12:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:14.582 [2024-10-21 12:12:51.006179] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:28:14.582 [2024-10-21 12:12:51.006234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146796 ]
00:28:14.582 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:14.582 Zero copy mechanism will not be used.
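The bdevperf invocation captured above is what drives this next pass; an annotated sketch follows, with flag meanings taken from bdevperf's usage text rather than from anything shown in this log:

# Annotated sketch of the launch traced above:
# -m 2: core mask 0x2, i.e. run on core 1 only
# -r:   UNIX domain socket bdevperf serves RPCs on (bperf_rpc/bperf_py target)
# -w randread: workload pattern; -o 131072: I/O size in bytes (128 KiB, which
#   is why the zero-copy threshold notice appears); -t 2: seconds to run;
# -q 16: queue depth; -z: start idle and wait for a perform_tests RPC
#   instead of running the workload immediately.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z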
00:28:14.582 [2024-10-21 12:12:51.082340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:14.582 [2024-10-21 12:12:51.110612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:15.524 12:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:15.785 nvme0n1
00:28:15.785 12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.046 12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.046 12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:16.046 12:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:16.046 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:16.046 Zero copy mechanism will not be used.
00:28:16.046 Running I/O for 2 seconds...
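Stripped of the xtrace noise, the setup just traced is a short RPC sequence. A condensed sketch is below; the trace shows bperf_rpc resolving to rpc.py against /var/tmp/bperf.sock (the host/digest.sh@18 lines), but the socket behind rpc_cmd is not visible in this log, so those two lines keep the suite's helper name rather than guessing a socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Keep per-controller NVMe error counters and retry failed I/O (-1: no limit).
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any crc32c injection left over from the previous case (helper from the suite).
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest enabled, producing the nvme0n1 bdev above.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm crc32c corruption (flags exactly as traced; helper from the suite).
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the queued 2-second randread run in the idle bdevperf (-z above).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bperf.sock perform_tests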
00:28:16.046 [2024-10-21 12:12:52.477754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.477787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.477800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.483939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.483959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.483967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.490210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.490228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.490234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.496371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.496388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.496395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.501931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.501948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.501955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.508017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.508034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.508040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.513267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.513284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.513291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.518264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.518281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.518287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.524031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.524048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.524055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.529544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.529567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.529574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.534845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.534862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.534869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.540199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.540216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.540223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.545498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.545516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.545522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.550709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.550726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.550733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.556191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.556209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.556216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.561628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.561645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.567641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.567658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.567665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.573129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.573147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.573153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.578512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.578529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.578536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.583608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.046 [2024-10-21 12:12:52.583626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.046 [2024-10-21 12:12:52.583632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.046 [2024-10-21 12:12:52.591730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.591748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.047 [2024-10-21 12:12:52.591755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.597199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.597216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.597223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.602658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.602675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.602681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.608594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.608611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.614100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.614117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.614123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.619734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.619752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.619758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.626836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.626857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.626863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.634793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.634810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.634817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.047 [2024-10-21 12:12:52.640341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.047 [2024-10-21 12:12:52.640358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.047 [2024-10-21 12:12:52.640364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.646602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.646620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.646626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.652596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.652612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.652619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.658339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.658356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.658362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.663868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.663885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.663892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.671568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.671585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.671592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.680487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.680504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.680511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.688411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.688428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.688435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.694798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.694816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.694823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.700106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.700124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.700130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.707157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.707174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.707180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.714446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.714463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.714470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.720950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.720967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.720973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.725867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 
00:28:16.309 [2024-10-21 12:12:52.725885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.731332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.731350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.731356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.736845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.736863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.736872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.742330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.742349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.742356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.747790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.747808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.747814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.752772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.752790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.752796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.758116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.758140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.763211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.763229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.763235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.768243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.768260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.768266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.773407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.773424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.773430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.778697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.778715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.778721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.783754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.783775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.783781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.788714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.788732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.788738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.793580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.309 [2024-10-21 12:12:52.793597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.309 [2024-10-21 12:12:52.793604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.309 [2024-10-21 12:12:52.798418] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.798437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.798444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.803272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.803290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.803297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.807986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.808004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.808010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.812972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.812990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.812997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.817002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.820010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.820028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.820034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.823983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.824000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.824006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:16.310 [2024-10-21 12:12:52.828770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.828788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.828794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.834010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.834027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.834033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.838219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.838243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.842920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.842938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.842944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.847547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.847566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.847572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.852342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.852359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.852366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.856926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.856950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.862038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.862056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.862066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.865610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.865628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.865634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.869906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.869924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.869931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.874098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.874116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.874122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.878587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.878605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.878611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.883312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.883336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.883342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.887846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.887864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.887871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.891963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.891981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.891988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.896384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.896402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.896408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.310 [2024-10-21 12:12:52.901054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.310 [2024-10-21 12:12:52.901075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.310 [2024-10-21 12:12:52.901081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.905541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.905560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.905566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.910073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.910092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.910098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.914516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.914533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.914540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.919138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.919156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.919162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.923945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.923963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.923969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.928985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.929003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.929010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.933685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.933703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.933709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.938566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.938584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.938590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.943478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.943495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.943502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.948678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.948696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.948702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.952045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.952062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 
[2024-10-21 12:12:52.952069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.963197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.963215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.963221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.969718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.969735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.975888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.975905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.975911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.981775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.572 [2024-10-21 12:12:52.981792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.572 [2024-10-21 12:12:52.981798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.572 [2024-10-21 12:12:52.989032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:52.989050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:52.989056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:52.996339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:52.996357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:52.996366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.002699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.002716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.002723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.009094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.009118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.014271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.014288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.014294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.023019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.023036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.023042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.030106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.030124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.030130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.036600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.036617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.036623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.042866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.042884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.042890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.049367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.049384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.049390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.056364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.056384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.056391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.064175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.064192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.064199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.072289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.072306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.072312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.079719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.079737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.079743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.086644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.086661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.086667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.093555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.093573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.093579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.101038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.101055] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.101061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.107695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.107712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.107718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.117063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.117080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.117087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.123550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.123567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.123573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.129127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.129144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.129150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.135066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.135083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.135089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.140797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.140815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.140822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.149387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.149405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.149411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.156930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.156948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.156954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.573 [2024-10-21 12:12:53.164987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.573 [2024-10-21 12:12:53.165006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.573 [2024-10-21 12:12:53.165012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.171019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.171037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.171044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.177066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.177085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.177094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.183394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.183412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.183418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.193040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.193058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.193064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.200900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 
[2024-10-21 12:12:53.200918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.200924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.208097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.208114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.208120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.835 [2024-10-21 12:12:53.216086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.835 [2024-10-21 12:12:53.216105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.835 [2024-10-21 12:12:53.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.223388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.223406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.223412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.231718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.231736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.231742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.239762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.239780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.239786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.247682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.247700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.247706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.256243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.256262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.256268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.264533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.264550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.264556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.272139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.272157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.272164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.280672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.280689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.280696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.289667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.289686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.289692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.295392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.295411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.295417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.300985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.301003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.301009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.307090] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.307108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.307120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.311840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.311858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.311864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.317048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.317066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.317072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.322021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.322039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.322045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.326993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.327010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.327017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.331909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.331927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.331933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.336807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.336825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.336831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:16.836 [2024-10-21 12:12:53.341324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.341342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.341349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.346172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.346190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.346196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.351153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.351174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.351180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.356233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.356257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.361121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.361138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.361144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.366035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.366053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.366059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.836 [2024-10-21 12:12:53.370385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:16.836 [2024-10-21 12:12:53.370402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.836 [2024-10-21 12:12:53.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:16.836 [2024-10-21 12:12:53.375039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:16.836 [2024-10-21 12:12:53.375057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.836 [2024-10-21 12:12:53.375063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... same three-line sequence (data digest error on tqpair=(0xa63120) / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeated for further READs with varying cid and lba, 12:12:53.379688 through 12:12:53.459883 ...]
00:28:17.099 5298.00 IOPS, 662.25 MiB/s [2024-10-21T10:12:53.694Z]
[... sequence continues, 12:12:53.465930 through 12:12:54.271998 ...]
00:28:17.889 [2024-10-21 12:12:54.278335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.889 [2024-10-21 12:12:54.278353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.889 [2024-10-21 12:12:54.278360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.285718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.285736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.285742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.292731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.292749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.292755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.299747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.299766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.299772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.305459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.305478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.305484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.313113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.313131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.313137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.320298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.320317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.320329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.326690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.326707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.326714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.333953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.333971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.333977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.340781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.340799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.340805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.349143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.349161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.349167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.354864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.354881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.354888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.358059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.358076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.358082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.364017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.364035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.364042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.370927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.370944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.370950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.377164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.377182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.377192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.384606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.384624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.384631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.392362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.392380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.392386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.399521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.399539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.399545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.405674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.405692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.405698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.412476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.412494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.889 [2024-10-21 12:12:54.412501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.889 [2024-10-21 12:12:54.421796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120) 00:28:17.889 [2024-10-21 12:12:54.421814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.890 
[2024-10-21 12:12:54.421820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.428419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.428438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.428445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.434622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.434641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.434647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.441377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.441401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.447718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.447736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.447742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.455788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.455805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.455811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.890 [2024-10-21 12:12:54.463711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa63120)
00:28:17.890 [2024-10-21 12:12:54.463729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.890 [2024-10-21 12:12:54.463735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.151 5024.00 IOPS, 628.00 MiB/s
00:28:18.151 Latency(us)
00:28:18.151 [2024-10-21T10:12:54.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:18.151 nvme0n1 : 2.04 4924.46 615.56 0.00 0.00 3185.30 397.65 45875.20
00:28:18.151 [2024-10-21T10:12:54.746Z] ===================================================================================================================
00:28:18.151 [2024-10-21T10:12:54.746Z] Total : 4924.46 615.56 0.00 0.00 3185.30 397.65 45875.20
00:28:18.151 {
00:28:18.151   "results": [
00:28:18.151     {
00:28:18.151       "job": "nvme0n1",
00:28:18.151       "core_mask": "0x2",
00:28:18.151       "workload": "randread",
00:28:18.151       "status": "finished",
00:28:18.151       "queue_depth": 16,
00:28:18.151       "io_size": 131072,
00:28:18.151       "runtime": 2.043675,
00:28:18.151       "iops": 4924.462059769778,
00:28:18.151       "mibps": 615.5577574712222,
00:28:18.151       "io_failed": 0,
00:28:18.151       "io_timeout": 0,
00:28:18.151       "avg_latency_us": 3185.2982299947007,
00:28:18.151       "min_latency_us": 397.6533333333333,
00:28:18.151       "max_latency_us": 45875.2
00:28:18.151     }
00:28:18.151   ],
00:28:18.151   "core_count": 1
00:28:18.151 }
00:28:18.151 12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:18.151 | .driver_specific
00:28:18.151 | .nvme_error
00:28:18.151 | .status_code
00:28:18.151 | .command_transient_transport_error'
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 324 > 0 ))
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1146796
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1146796 ']'
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1146796
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146796
00:28:18.412 12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146796'
killing process with pid 1146796
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1146796
00:28:18.412 Received shutdown signal, test time was about 2.000000 seconds
00:28:18.412
00:28:18.412 Latency(us)
00:28:18.412 [2024-10-21T10:12:55.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.412 [2024-10-21T10:12:55.007Z] ===================================================================================================================
00:28:18.412 [2024-10-21T10:12:55.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:18.412 12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1146796
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1147486
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1147486 /var/tmp/bperf.sock
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1147486 ']'
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:18.412 [2024-10-21 12:12:54.929269] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
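(Aside: the randread summary printed above is internally consistent. With a 131072-byte I/O size, 4924.462 IOPS x 131072 B / 1048576 B/MiB = 615.558 MiB/s, matching the reported "mibps", and 4924.462 IOPS x 2.043675 s of runtime is roughly 10064 I/Os completed during the 2-second window.)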
00:28:18.412 [2024-10-21 12:12:54.929340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147486 ]
00:28:18.412 [2024-10-21 12:12:55.003289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:18.412 [2024-10-21 12:12:55.032687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:18.702 12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:18.987 12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:18.987 12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:18.987 nvme0n1
00:28:18.987 12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.250 12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:19.250 Running I/O for 2 seconds...
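(The xtrace just above is the entire recipe for this error pass: bdevperf is told to record NVMe error statistics and never retry, any stale CRC32C fault injection is cleared, the controller is attached with TCP data digest enabled via --ddgst, the accel layer is armed to corrupt the next 256 CRC32C operations, and only then does perform_tests start the timed run. A condensed sketch of that RPC sequence follows, assuming the same /var/tmp/bperf.sock socket and an SPDK checkout as the working directory; every command and flag is taken verbatim from the trace, but this is a sketch, not the full digest.sh.)

    #!/usr/bin/env bash
    # Sketch: replay of the write-error-pass setup traced above.
    RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # record error stats, never retry
    $RPC accel_error_inject_error -o crc32c -t disable                   # clear any stale injection
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest on the TCP qpair
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt the next 256 crc32c ops
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests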
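(When the run completes, the harness scores it the same way it scored the randread pass above: each corrupted digest must surface as a COMMAND TRANSIENT TRANSPORT ERROR completion, so get_transient_errcount pulls the per-bdev error counters over the RPC socket and asserts the count is positive, which is what the (( 324 > 0 )) check earlier was doing. A minimal standalone sketch of that check, using the bdev_get_iostat call and jq filter exactly as they appear in the trace; the errcount variable name is illustrative.)

    # Sketch: count TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    # Requires bdev_nvme_set_options --nvme-error-stat to have been set first.
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 )) || echo "expected transient transport errors, got ${errcount}" >&2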
00:28:19.250 [2024-10-21 12:12:55.674057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.674370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.674396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.682969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.683270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.683288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.691790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.692101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.700646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.700933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.700949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.709425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.709694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.709711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.718203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.718467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.718482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.726959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.727256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.727273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.735731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.735981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.735997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.744506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.744641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.744656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.753292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.753604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.753620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.762054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.762330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.762346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.770778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.771057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.771073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.779491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.779810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.779826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.788212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.788469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.788484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.797008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.797326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.797343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.805736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.805888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.805904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.814488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.814751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.814767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.823183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.823325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.250 [2024-10-21 12:12:55.823341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.250 [2024-10-21 12:12:55.831958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.250 [2024-10-21 12:12:55.832213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.251 [2024-10-21 12:12:55.832229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.251 [2024-10-21 12:12:55.840693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.251 [2024-10-21 12:12:55.840972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.251 [2024-10-21 12:12:55.840989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.512 [2024-10-21 12:12:55.849489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.512 [2024-10-21 12:12:55.849774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.512 [2024-10-21 12:12:55.849790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.512 [2024-10-21 12:12:55.858240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.512 [2024-10-21 12:12:55.858549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.512 [2024-10-21 12:12:55.858568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.866965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.867270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.867286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.875652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.875930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.875946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.884453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.884722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.884738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.893182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.893435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.893450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.901962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.902314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.902334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.910695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.910987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.919464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.919730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.919745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.928186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.928470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.928486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.936902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.937268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.937284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.945626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.945916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.945932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.954360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.954646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.954662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.963092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.963372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.963389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.971763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.972004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.972019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.980500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.980765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.980781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.989255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.989554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.989571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:55.997976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:55.998272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:55.998287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:56.006675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:56.006940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:56.006955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:56.015386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:56.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:56.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:56.024067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:56.024343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:56.024358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:19.513 [2024-10-21 12:12:56.032866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:19.513 [2024-10-21 12:12:56.033133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.513 [2024-10-21 12:12:56.033149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:19.513 [2024-10-21 12:12:56.041625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:19.513 [2024-10-21 12:12:56.041886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:19.513 [2024-10-21 12:12:56.041902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:19.513 [2024-10-21 12:12:56.050388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:19.513 [2024-10-21 12:12:56.050713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:19.513 [2024-10-21 12:12:56.050728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same error/WRITE/completion triplet repeats every ~9 ms from 12:12:56.059 through 12:12:56.653, cycling cids 2, 1, 0, 8 on qid:1 with varying lba; every WRITE hits a data digest error on tqpair=(0x1bc2c60) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:20.304 [2024-10-21 12:12:56.662240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:20.304 29035.00 IOPS, 113.42 MiB/s [2024-10-21T10:12:56.899Z] [2024-10-21 12:12:56.663056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:20.304 [2024-10-21 12:12:56.663070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:20.304 [2024-10-21 12:12:56.670973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:20.304 [2024-10-21 12:12:56.671228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:20.304 [2024-10-21 12:12:56.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
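What these records are reporting: in NVMe/TCP the optional data digest (DDGST) is a CRC32C computed over each data-bearing PDU's payload, and the "(00/22)" in the completions is Status Code Type 0x0 / Status Code 0x22, i.e. Transient Transport Error, the status used when the digest the receiver computes disagrees with the one carried in the PDU. Below is a minimal Python sketch of that check, for illustration only; it is not SPDK's C implementation, and the payload and digest values are invented for the example.

    # CRC32C (Castagnoli), the digest NVMe/TCP uses for HDGST/DDGST.
    # Table-driven, reflected polynomial 0x82F63B78.
    _TABLE = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        _TABLE.append(crc)

    def crc32c(data: bytes) -> int:
        # Standard reflected CRC32C: init and final XOR are all-ones.
        crc = 0xFFFFFFFF
        for b in data:
            crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(payload: bytes, received_digest: int) -> bool:
        # A mismatch here is what tcp.c logs above as "Data digest error",
        # after which the WRITE completes with TRANSIENT TRANSPORT ERROR (00/22).
        return crc32c(payload) == received_digest

    # Hypothetical 4 KiB payload, matching the len:0x1000 in the WRITEs above.
    payload = bytes(4096)
    good = crc32c(payload)
    assert ddgst_ok(payload, good)
    assert not ddgst_ok(payload, good ^ 1)   # corrupted digest -> digest error
    print(f"{crc32c(b'123456789'):#010x}")   # 0xe3069283, the CRC32C check value

Given that every WRITE in this stretch fails identically while per-second throughput keeps being reported, this is presumably an error-injection pass over the digest path rather than real corruption on the wire.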
[... the triplets continue every ~9 ms from 12:12:56.679 through 12:12:57.292, now cycling cids 9, 14, 13, 3 on qid:1 with varying lba; each WRITE again hits a data digest error on tqpair=(0x1bc2c60) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:20.831 [2024-10-21 12:12:57.301341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:20.831 [2024-10-21 12:12:57.301623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:20.831 [2024-10-21 12:12:57.301639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:20.831 [2024-10-21 12:12:57.310117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208
00:28:20.831 [2024-10-21 12:12:57.310411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:20.831 [2024-10-21 12:12:57.310427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:20.831 [2024-10-21 12:12:57.318907] tcp.c:2233:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.319157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.319172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.327659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.327883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.327898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.336389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.336632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.336647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.345207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.345345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.345360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.353968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.354283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.354299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.362723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.362994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.363009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.371440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.371690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.371705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.380202] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.380482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.380505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.388994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.389312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.389333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.397786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.398049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.398065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.831 [2024-10-21 12:12:57.406558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.831 [2024-10-21 12:12:57.406847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.831 [2024-10-21 12:12:57.406864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.832 [2024-10-21 12:12:57.415342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.832 [2024-10-21 12:12:57.415613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.832 [2024-10-21 12:12:57.415629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:20.832 [2024-10-21 12:12:57.424096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:20.832 [2024-10-21 12:12:57.424346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.832 [2024-10-21 12:12:57.424361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.432821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.433061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.433076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 
12:12:57.441625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.441842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.441857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.450416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.450657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.450672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.459188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.459444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.459463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.467889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.468185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.468201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.476628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.476876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.476891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.485350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.485577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.485592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.494107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.494403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.494419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 
[2024-10-21 12:12:57.502901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.503155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.093 [2024-10-21 12:12:57.503170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.093 [2024-10-21 12:12:57.511761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.093 [2024-10-21 12:12:57.511980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.511995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.520533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.520771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.520787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.529353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.529616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.529631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.538089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.538219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.538235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.546900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.547054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.547070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.555626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.555912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:28:21.094 [2024-10-21 12:12:57.564383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.564620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.564635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.573122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.573381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.573397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.581888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.582125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.582140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.590657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.590930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.590945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.599376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.599620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.608125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.608391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.608406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.616909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.617050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.617065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.625653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.625881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.625897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.634375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.634649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.634664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.643086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.643398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.643414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.651900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.652194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 [2024-10-21 12:12:57.660607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2c60) with pdu=0x2000166fd208 00:28:21.094 [2024-10-21 12:12:57.660834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.094 [2024-10-21 12:12:57.660849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.094 29112.00 IOPS, 113.72 MiB/s 00:28:21.094 Latency(us) 00:28:21.094 [2024-10-21T10:12:57.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.094 nvme0n1 : 2.00 29113.42 113.72 0.00 0.00 4389.79 3304.11 11086.51 00:28:21.094 [2024-10-21T10:12:57.689Z] =================================================================================================================== 00:28:21.094 [2024-10-21T10:12:57.689Z] Total : 29113.42 113.72 0.00 0.00 4389.79 3304.11 11086.51 00:28:21.094 { 00:28:21.094 "results": [ 00:28:21.094 { 00:28:21.094 "job": "nvme0n1", 00:28:21.094 "core_mask": "0x2", 00:28:21.094 "workload": "randwrite", 00:28:21.094 "status": "finished", 00:28:21.094 "queue_depth": 128, 00:28:21.094 "io_size": 4096, 00:28:21.094 "runtime": 2.004024, 00:28:21.094 "iops": 29113.42379133184, 00:28:21.094 "mibps": 113.72431168489, 00:28:21.094 
"io_failed": 0, 00:28:21.094 "io_timeout": 0, 00:28:21.094 "avg_latency_us": 4389.785361305361, 00:28:21.094 "min_latency_us": 3304.1066666666666, 00:28:21.094 "max_latency_us": 11086.506666666666 00:28:21.094 } 00:28:21.094 ], 00:28:21.094 "core_count": 1 00:28:21.094 } 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:21.356 | .driver_specific 00:28:21.356 | .nvme_error 00:28:21.356 | .status_code 00:28:21.356 | .command_transient_transport_error' 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1147486 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1147486 ']' 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1147486 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1147486 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1147486' 00:28:21.356 killing process with pid 1147486 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1147486 00:28:21.356 Received shutdown signal, test time was about 2.000000 seconds 00:28:21.356 00:28:21.356 Latency(us) 00:28:21.356 [2024-10-21T10:12:57.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.356 [2024-10-21T10:12:57.951Z] =================================================================================================================== 00:28:21.356 [2024-10-21T10:12:57.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.356 12:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1147486 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:21.617 12:12:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1148161 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1148161 /var/tmp/bperf.sock 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1148161 ']' 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.617 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.617 [2024-10-21 12:12:58.086909] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:28:21.617 [2024-10-21 12:12:58.086970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148161 ] 00:28:21.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.617 Zero copy mechanism will not be used. 
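For reference, the (( 228 > 0 )) check that ended the previous run reduces to one RPC plus one jq filter: bdevperf keeps per-bdev NVMe error counters (enabled earlier with bdev_nvme_set_options --nvme-error-stat), and host/digest.sh passes the test when the transient-transport-error counter is non-zero. A minimal standalone sketch of that step, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock with bdev nvme0n1 attached (the relative script paths are illustrative, not taken from the trace):

  # Pull the iostat JSON from bdevperf and extract the transient transport error counter,
  # mirroring get_transient_errcount from host/digest.sh.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test passes only if at least one corrupted write was observed.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The throughput numbers in the JSON result block above are internally consistent: 29113.42 IOPS at an io_size of 4096 bytes is 29113.42 * 4096 / 2^20 ≈ 113.72 MiB/s, matching the reported mibps.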
00:28:21.617 [2024-10-21 12:12:58.164331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.617 [2024-10-21 12:12:58.193565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.558 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.558 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:22.558 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.559 12:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.559 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.130 nvme0n1 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:23.130 12:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.130 Zero copy mechanism will not be used. 00:28:23.130 Running I/O for 2 seconds... 
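The digest-error output that follows is fully set up by the commands traced above: bdevperf runs a 2-second randwrite workload at queue depth 16 with 131072-byte I/O, crc32c corruption is injected into the accel layer every 32 operations, and the controller is attached with --ddgst so TCP data digests are generated and verified on every PDU. A condensed, hypothetical replay of that sequence (flags copied from the trace; the plain rpc.py calls assume the nvmf target's default RPC socket, which is an assumption about this environment, not something the trace states):

  # Start bdevperf for the qd=16, 131072-byte randwrite error run.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # Keep per-bdev NVMe error counters and retry failed I/O indefinitely in the bdev layer.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays disabled while the controller connects, exactly as the trace does.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach with data digest enabled; corrupted digests will surface as transport errors.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt every 32nd crc32c operation and run the 2-second workload.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each write whose digest was corrupted completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 the bdev layer retries it, so the workload keeps running while the error counters accumulate.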
00:28:23.130 [2024-10-21 12:12:59.562870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.130 [2024-10-21 12:12:59.563236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.130 [2024-10-21 12:12:59.563262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[log condensed: the same crc32c-corruption triplet now repeats against tqpair 0x1bc2fa0 / pdu 0x2000166fef90 for the 131072-byte qd=16 run; timestamps, lba and sqhd values vary while every affected write completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
00:28:23.393 [2024-10-21 12:12:59.969302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.393 [2024-10-21 12:12:59.969504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.393 [2024-10-21 12:12:59.969521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.393 [2024-10-21 12:12:59.981387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.393 [2024-10-21 12:12:59.981591] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.393 [2024-10-21 12:12:59.981608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:12:59.993217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:12:59.993450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:12:59.993468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.005372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.005573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.005592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.017499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.017814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.017833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.028215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.028532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.028552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.039619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.039859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.039881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.050516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.050731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.050749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.061347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 
[2024-10-21 12:13:00.061654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.061674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.072445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.072774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.072797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.084200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.084530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.084548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.096147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.096514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.096533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.108028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.108269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.108288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.654 [2024-10-21 12:13:00.119593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.654 [2024-10-21 12:13:00.119809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.654 [2024-10-21 12:13:00.119827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.131973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.132184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.132201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.142974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.143292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.143311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.154388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.154635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.154652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.165770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.166097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.166116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.176910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.177148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.177165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.188677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.189080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.189098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.200124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.200361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.200378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.211069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.211344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.211361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.222955] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.223168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.223185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.234073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.234279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.234300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.655 [2024-10-21 12:13:00.245235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.655 [2024-10-21 12:13:00.245475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.655 [2024-10-21 12:13:00.245492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.256862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.257111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.257129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.268150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.268421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.268439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.279255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.279478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.279495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.291167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.291396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.291413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
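(Editor's note on the repeating pattern above: each pair of records shows the NVMe/TCP initiator detecting a data digest (DDGST) mismatch on a PDU received for qpair 0x1bc2fa0, then completing the affected WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. a retryable transport-level failure rather than a media error. The sketch below illustrates the digest check itself; it assumes a software CRC32C with the reflected polynomial 0x82F63B78, since NVMe/TCP specifies CRC32C for the data digest. It is an illustration only, not SPDK's implementation, and check_data_digest is a hypothetical helper name.)

/*
 * Editor's sketch (not SPDK code): the check that tcp.c reports failing
 * above. NVMe/TCP protects a data PDU's payload with a 4-byte CRC32C
 * data digest (DDGST); on mismatch the command is completed with a
 * transient transport error so the host may retry. A bitwise CRC32C is
 * used here for clarity; real implementations use table-driven or
 * hardware-accelerated CRC32C.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Returns 0 if the received DDGST matches the payload, nonzero on a
 * "Data digest error" like the ones logged above. */
static int check_data_digest(const uint8_t *payload, size_t len,
                             uint32_t received_ddgst)
{
    return crc32c(payload, len) != received_ddgst;
}

int main(void)
{
    const uint8_t payload[] = "nvme/tcp data pdu payload";
    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact: %d corrupted: %d\n",
           check_data_digest(payload, sizeof(payload), good),
           check_data_digest(payload, sizeof(payload), good ^ 1u));
    return 0;
}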
00:28:23.916 [2024-10-21 12:13:00.301554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.301858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.301876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.313004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.313218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.313235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.321394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.321724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.321742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.329819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.330152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.330170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.338993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.339058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.339074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.346185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.346380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.354861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.355052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.355069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.362908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.363211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.363228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.372544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.372600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.372616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.381671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.382041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.382058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.389378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.389617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.389633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.399164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.399467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.399489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.407871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.408159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.408176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.416430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.416741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.416758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.916 [2024-10-21 12:13:00.425103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.916 [2024-10-21 12:13:00.425374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.916 [2024-10-21 12:13:00.425392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.434835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.435170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.444708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.445035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.445053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.451435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.451628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.451645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.460106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.460401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.460418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.467293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.467613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.467631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.473084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.473278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.473295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.482130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.482426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.489471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.489756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.489774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.496835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.497130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.497149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.917 [2024-10-21 12:13:00.505770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:23.917 [2024-10-21 12:13:00.506078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.917 [2024-10-21 12:13:00.506095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.514374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.514715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.514733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.525584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.525881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.525899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.535424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.535733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 
[2024-10-21 12:13:00.535751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.544936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.545232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.545249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.554401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.554694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.554712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.178 3286.00 IOPS, 410.75 MiB/s [2024-10-21T10:13:00.773Z] [2024-10-21 12:13:00.564032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.564265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.564281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.571485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.178 [2024-10-21 12:13:00.571740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.178 [2024-10-21 12:13:00.571758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.178 [2024-10-21 12:13:00.579506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.579872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.579889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.588643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.588878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.588895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.598842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.599148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.599166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.609491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.609752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.609770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.620778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.621085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.621103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.632583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.632809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.632829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.643513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.643731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.643748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.654584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.654834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.654850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.665444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.665771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.665789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.676526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.676924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.676941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.687697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.687918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.696664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.696854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.696871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.705830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.706167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.706185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.714700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.714992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.715009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.722964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.723181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.723198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.732571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.732900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.732918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.743160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 
[2024-10-21 12:13:00.743355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.743372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.752514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.752806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.752824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.759527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.759717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.759734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.764415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.764606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.764622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.179 [2024-10-21 12:13:00.768533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.179 [2024-10-21 12:13:00.768723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.179 [2024-10-21 12:13:00.768740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.774384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.774432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.774447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.778876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.779066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.779082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.783377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.783568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.783585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.789989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.790180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.790197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.795908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.796098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.799776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.799966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.799982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.805445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.805636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.805653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.811390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.811580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.811597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.815495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.815684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.815701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.823164] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.823426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.823443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.828332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.828512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.828532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.832973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.833153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.833170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.838469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.838648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.838665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.844055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.844235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.844251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.847469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.847649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.847665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.441 [2024-10-21 12:13:00.851643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.851821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.851838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
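(Editor's note on the fields SPDK prints: the "(00/22)" pair is status code type / status code from the 16-bit completion status word, and the interleaved progress line "3286.00 IOPS, 410.75 MiB/s" above is self-consistent arithmetic: 410.75 MiB/s / 3286 IOPS = 128 KiB per I/O, which matches the len:32 blocks seen on every WRITE at an assumed 4 KiB block size. The sketch below decodes the status word, assuming the standard NVMe completion status layout: bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type, bit 14 more, bit 15 do-not-retry.)

/*
 * Editor's sketch (not SPDK code): decoding the completion status behind
 * "COMMAND TRANSIENT TRANSPORT ERROR (00/22) ... m:0 dnr:0". dnr:0 means
 * the host is allowed to retry the failed WRITE.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* SCT 0x0 (generic command status), SC 0x22 (command transient
     * transport error), as printed throughout the log. */
    uint16_t status = (uint16_t)((0x0u << 9) | (0x22u << 1));

    unsigned sc  = (status >> 1) & 0xFFu;
    unsigned sct = (status >> 9) & 0x7u;
    unsigned m   = (status >> 14) & 0x1u;
    unsigned dnr = (status >> 15) & 0x1u;

    printf("(%02X/%02X) m:%u dnr:%u\n", sct, sc, m, dnr); /* (00/22) m:0 dnr:0 */
    return 0;
}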
00:28:24.441 [2024-10-21 12:13:00.855925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.441 [2024-10-21 12:13:00.856114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.441 [2024-10-21 12:13:00.856130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.865502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.865711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.865728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.871905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.872085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.872102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.881059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.881356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.881374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.887577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.887756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.887773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.891517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.891696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.891713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.442 [2024-10-21 12:13:00.896768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90 00:28:24.442 [2024-10-21 12:13:00.896949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.442 [2024-10-21 12:13:00.896965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.442 [2024-10-21 12:13:00.902165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90
00:28:24.442 [2024-10-21 12:13:00.902350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.442 [2024-10-21 12:13:00.902367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern, a data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x1bc2fa0), the offending WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats for every further queued WRITE from 12:13:00.910 through 12:13:01.550 ...]
00:28:24.970 [2024-10-21 12:13:01.553638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90
00:28:24.970 [2024-10-21 12:13:01.553702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.970 [2024-10-21 12:13:01.553718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.970 [2024-10-21 12:13:01.557314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc2fa0) with pdu=0x2000166fef90
00:28:24.970 [2024-10-21 12:13:01.558341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.970 [2024-10-21 12:13:01.558359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.970 4199.00 IOPS, 524.88 MiB/s
00:28:24.970 Latency(us)
00:28:24.970 [2024-10-21T10:13:01.565Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:28:24.970 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:24.970 nvme0n1 : 2.00  4200.82  525.10  0.00  0.00  3804.55  1501.87  13598.72
00:28:24.970 [2024-10-21T10:13:01.565Z] ===================================================================================================================
00:28:24.970 [2024-10-21T10:13:01.565Z] Total : 4200.82  525.10  0.00  0.00  3804.55  1501.87  13598.72
00:28:25.230 {
00:28:25.230   "results": [
00:28:25.230     {
00:28:25.230       "job": "nvme0n1",
00:28:25.230       "core_mask": "0x2",
00:28:25.230       "workload": "randwrite",
00:28:25.230       "status": "finished",
00:28:25.230       "queue_depth": 16,
00:28:25.230       "io_size": 131072,
00:28:25.230       "runtime": 2.002703,
00:28:25.230       "iops": 4200.822588271951,
00:28:25.230       "mibps": 525.1028235339938,
00:28:25.230       "io_failed": 0,
00:28:25.230       "io_timeout": 0,
00:28:25.230       "avg_latency_us": 3804.554563968462,
00:28:25.230       "min_latency_us": 1501.8666666666666,
00:28:25.230       "max_latency_us": 13598.72
00:28:25.230     }
00:28:25.230   ],
00:28:25.230   "core_count": 1
00:28:25.230 }
00:28:25.230 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:25.230 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:25.230 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:25.230 | .driver_specific
00:28:25.230 | .nvme_error
00:28:25.230 | .status_code
00:28:25.230 | .command_transient_transport_error'
00:28:25.230 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 271 > 0 ))
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148161 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148161' 00:28:25.490 killing process with pid 1148161 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1148161 00:28:25.490 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.490 00:28:25.490 Latency(us) 00:28:25.490 [2024-10-21T10:13:02.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.490 [2024-10-21T10:13:02.085Z] =================================================================================================================== 00:28:25.490 [2024-10-21T10:13:02.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1148161 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1145766 00:28:25.490 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1145766 ']' 00:28:25.491 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1145766 00:28:25.491 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:25.491 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.491 12:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1145766 00:28:25.491 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:25.491 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:25.491 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1145766' 00:28:25.491 killing process with pid 1145766 00:28:25.491 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1145766 00:28:25.491 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1145766 00:28:25.751 00:28:25.751 real 0m16.032s 00:28:25.751 user 0m31.531s 00:28:25.751 sys 0m3.680s 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.751 ************************************ 00:28:25.751 END TEST nvmf_digest_error 00:28:25.751 ************************************ 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:25.751 12:13:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.751 rmmod nvme_tcp 00:28:25.751 rmmod nvme_fabrics 00:28:25.751 rmmod nvme_keyring 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1145766 ']' 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1145766 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1145766 ']' 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1145766 00:28:25.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1145766) - No such process 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1145766 is not found' 00:28:25.751 Process with pid 1145766 is not found 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:25.751 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:25.752 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:25.752 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.752 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.752 12:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.298 00:28:28.298 real 0m41.534s 00:28:28.298 user 1m3.952s 00:28:28.298 sys 0m13.199s 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:28.298 ************************************ 00:28:28.298 END TEST nvmf_digest 00:28:28.298 ************************************ 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:28.298 12:13:04 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.298 ************************************ 00:28:28.298 START TEST nvmf_bdevperf 00:28:28.298 ************************************ 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:28.298 * Looking for test storage... 00:28:28.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:28.298 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:28.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.299 --rc genhtml_branch_coverage=1 00:28:28.299 --rc genhtml_function_coverage=1 00:28:28.299 --rc genhtml_legend=1 00:28:28.299 --rc geninfo_all_blocks=1 00:28:28.299 --rc geninfo_unexecuted_blocks=1 00:28:28.299 00:28:28.299 ' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:28.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.299 --rc genhtml_branch_coverage=1 00:28:28.299 --rc genhtml_function_coverage=1 00:28:28.299 --rc genhtml_legend=1 00:28:28.299 --rc geninfo_all_blocks=1 00:28:28.299 --rc geninfo_unexecuted_blocks=1 00:28:28.299 00:28:28.299 ' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:28.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.299 --rc genhtml_branch_coverage=1 00:28:28.299 --rc genhtml_function_coverage=1 00:28:28.299 --rc genhtml_legend=1 00:28:28.299 --rc geninfo_all_blocks=1 00:28:28.299 --rc geninfo_unexecuted_blocks=1 00:28:28.299 00:28:28.299 ' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:28.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.299 --rc genhtml_branch_coverage=1 00:28:28.299 --rc genhtml_function_coverage=1 00:28:28.299 --rc genhtml_legend=1 00:28:28.299 --rc geninfo_all_blocks=1 00:28:28.299 --rc geninfo_unexecuted_blocks=1 00:28:28.299 00:28:28.299 ' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.299 12:13:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:36.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:36.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:36.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:36.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.444 12:13:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:28:36.444 00:28:36.444 --- 10.0.0.2 ping statistics --- 00:28:36.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.444 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:28:36.444 00:28:36.444 --- 10.0.0.1 ping statistics --- 00:28:36.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.444 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.444 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1153167 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1153167 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1153167 ']' 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.445 12:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.445 [2024-10-21 12:13:12.266831] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:28:36.445 [2024-10-21 12:13:12.266897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.445 [2024-10-21 12:13:12.357349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:36.445 [2024-10-21 12:13:12.409466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.445 [2024-10-21 12:13:12.409519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.445 [2024-10-21 12:13:12.409529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.445 [2024-10-21 12:13:12.409536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.445 [2024-10-21 12:13:12.409543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.445 [2024-10-21 12:13:12.411386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.445 [2024-10-21 12:13:12.411602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.445 [2024-10-21 12:13:12.411603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 [2024-10-21 12:13:13.143037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 Malloc0 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 [2024-10-21 12:13:13.215895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:36.706 { 00:28:36.706 "params": { 00:28:36.706 "name": "Nvme$subsystem", 00:28:36.706 "trtype": "$TEST_TRANSPORT", 00:28:36.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.706 "adrfam": "ipv4", 00:28:36.706 "trsvcid": "$NVMF_PORT", 00:28:36.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.706 "hdgst": ${hdgst:-false}, 00:28:36.706 "ddgst": ${ddgst:-false} 00:28:36.706 }, 00:28:36.706 "method": "bdev_nvme_attach_controller" 00:28:36.706 } 00:28:36.706 EOF 00:28:36.706 )") 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:36.706 12:13:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:36.706 "params": { 00:28:36.706 "name": "Nvme1", 00:28:36.706 "trtype": "tcp", 00:28:36.706 "traddr": "10.0.0.2", 00:28:36.706 "adrfam": "ipv4", 00:28:36.706 "trsvcid": "4420", 00:28:36.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.706 "hdgst": false, 00:28:36.706 "ddgst": false 00:28:36.706 }, 00:28:36.706 "method": "bdev_nvme_attach_controller" 00:28:36.706 }' 00:28:36.706 [2024-10-21 12:13:13.276861] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:28:36.706 [2024-10-21 12:13:13.276928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153219 ] 00:28:36.967 [2024-10-21 12:13:13.362084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.967 [2024-10-21 12:13:13.416207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.228 Running I/O for 1 seconds... 00:28:38.169 8631.00 IOPS, 33.71 MiB/s 00:28:38.169 Latency(us) 00:28:38.169 [2024-10-21T10:13:14.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:38.169 Verification LBA range: start 0x0 length 0x4000 00:28:38.169 Nvme1n1 : 1.01 8671.70 33.87 0.00 0.00 14694.29 2785.28 12834.13 00:28:38.169 [2024-10-21T10:13:14.764Z] =================================================================================================================== 00:28:38.169 [2024-10-21T10:13:14.764Z] Total : 8671.70 33.87 0.00 0.00 14694.29 2785.28 12834.13 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1153556 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:38.429 { 00:28:38.429 "params": { 00:28:38.429 "name": "Nvme$subsystem", 00:28:38.429 "trtype": "$TEST_TRANSPORT", 00:28:38.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.429 "adrfam": "ipv4", 00:28:38.429 "trsvcid": "$NVMF_PORT", 00:28:38.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.429 "hdgst": ${hdgst:-false}, 00:28:38.429 "ddgst": ${ddgst:-false} 00:28:38.429 }, 00:28:38.429 "method": "bdev_nvme_attach_controller" 00:28:38.429 } 00:28:38.429 EOF 00:28:38.429 )") 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:38.429 12:13:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:38.429 "params": { 00:28:38.429 "name": "Nvme1", 00:28:38.429 "trtype": "tcp", 00:28:38.429 "traddr": "10.0.0.2", 00:28:38.429 "adrfam": "ipv4", 00:28:38.429 "trsvcid": "4420", 00:28:38.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.429 "hdgst": false, 00:28:38.429 "ddgst": false 00:28:38.429 }, 00:28:38.429 "method": "bdev_nvme_attach_controller" 00:28:38.429 }' 00:28:38.429 [2024-10-21 12:13:14.844166] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:28:38.429 [2024-10-21 12:13:14.844240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153556 ] 00:28:38.429 [2024-10-21 12:13:14.927867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.429 [2024-10-21 12:13:14.967762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.690 Running I/O for 15 seconds... 00:28:41.013 10940.00 IOPS, 42.73 MiB/s [2024-10-21T10:13:17.871Z] 11065.00 IOPS, 43.22 MiB/s [2024-10-21T10:13:17.871Z] 12:13:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1153167 00:28:41.276 12:13:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:41.276 [2024-10-21 12:13:17.804685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-10-21 12:13:17.804727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-10-21 12:13:17.804749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-10-21 12:13:17.804760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-10-21 12:13:17.804771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-10-21 12:13:17.804779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-10-21 12:13:17.804790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-10-21 12:13:17.804800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-10-21 12:13:17.804810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 
12:13:17.804838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.804978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.804989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.277 [2024-10-21 12:13:17.805227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-10-21 12:13:17.805235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.277 [2024-10-21 12:13:17.805245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.277 [2024-10-21 12:13:17.805252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... between 12:13:17.805245 and 12:13:17.806703 the same command/completion pair repeats for every queued READ (lba 95304 through 95968, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:28:41.280 [2024-10-21 12:13:17.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:41.280 [2024-10-21 12:13:17.806720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... likewise through 12:13:17.806980 for every queued WRITE (lba 95992 through 96104, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), all completing with ABORTED - SQ DELETION (00/08) qid:1 ...]
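Every completion above carries status (00/08): status code type 0x0 (generic command status) and status code 0x08, Aborted - SQ Deletion. The commands were flushed unexecuted when their submission queue, here the TCP qpair, was torn down. As a minimal sketch, assuming a plain SPDK application built against the public headers (the retry decision shown is hypothetical, not what this autotest does), an I/O completion callback can recognize that status explicitly:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Matches the spdk_nvme_cmd_cb signature used when submitting I/O. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* success path */
            }
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Flushed unexecuted when the qpair was deleted; safe to
                     * resubmit once the controller reset reconnects the qpair. */
                    fprintf(stderr, "I/O aborted by SQ deletion; queuing retry\n");
                    /* requeue_io(cb_arg);  <- hypothetical application helper */
                    return;
            }
            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                    cpl->status.sct, cpl->status.sc);
    }

Distinguishing this status from media errors matters because a command aborted by SQ deletion was never executed, so resubmitting it after reconnect is safe.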
00:28:41.280 [2024-10-21 12:13:17.806988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1095530 is same with the state(6) to be set
00:28:41.280 [2024-10-21 12:13:17.806997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:41.280 [2024-10-21 12:13:17.807003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:41.280 [2024-10-21 12:13:17.807010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0
00:28:41.280 [2024-10-21 12:13:17.807018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.280 [2024-10-21 12:13:17.807057] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1095530 was disconnected and freed. reset controller.
00:28:41.280 [2024-10-21 12:13:17.810603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.280 [2024-10-21 12:13:17.810653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:41.280 [2024-10-21 12:13:17.811294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.280 [2024-10-21 12:13:17.811311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:41.280 [2024-10-21 12:13:17.811325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:41.280 [2024-10-21 12:13:17.811546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:41.280 [2024-10-21 12:13:17.811767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.280 [2024-10-21 12:13:17.811776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.280 [2024-10-21 12:13:17.811785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.280 [2024-10-21 12:13:17.815338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
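The reset at 12:13:17.810603 immediately hits connect() failed, errno = 111. On Linux errno 111 is ECONNREFUSED: the host reached 10.0.0.2 but nothing was accepting on TCP port 4420, the NVMe/TCP listener having gone away with the target. A standalone illustration, independent of SPDK, of where that number comes from:

    /* Illustration only (not SPDK code): with no listener on the target
     * address, connect() fails with errno 111 (ECONNREFUSED) on Linux.
     * Address and port mirror the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(4420), /* NVMe/TCP default port */
            };
            inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                    printf("connect() failed, errno = %d (%s)\n",
                           errno, strerror(errno));
            }
            close(fd);
            return 0;
    }

Because the refusal happens at connect() time, nvme_tcp_qpair_connect_sock fails before any NVMe-level handshake is attempted, and the reconnect poller reports the controller back as failed.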
00:28:41.280 [2024-10-21 12:13:17.824756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.280 [2024-10-21 12:13:17.825424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-10-21 12:13:17.825463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.280 [2024-10-21 12:13:17.825476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.280 [2024-10-21 12:13:17.825719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.280 [2024-10-21 12:13:17.825944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.280 [2024-10-21 12:13:17.825954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.280 [2024-10-21 12:13:17.825962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.280 [2024-10-21 12:13:17.829525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.280 [2024-10-21 12:13:17.838726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.280 [2024-10-21 12:13:17.839400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-10-21 12:13:17.839441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.280 [2024-10-21 12:13:17.839454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.280 [2024-10-21 12:13:17.839695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.280 [2024-10-21 12:13:17.839920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.280 [2024-10-21 12:13:17.839929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.280 [2024-10-21 12:13:17.839938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.280 [2024-10-21 12:13:17.843502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.280 [2024-10-21 12:13:17.852733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.280 [2024-10-21 12:13:17.853288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-10-21 12:13:17.853309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.280 [2024-10-21 12:13:17.853317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.280 [2024-10-21 12:13:17.853545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.281 [2024-10-21 12:13:17.853765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.281 [2024-10-21 12:13:17.853774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.281 [2024-10-21 12:13:17.853781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.281 [2024-10-21 12:13:17.857328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.281 [2024-10-21 12:13:17.866730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.281 [2024-10-21 12:13:17.867400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-10-21 12:13:17.867443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.281 [2024-10-21 12:13:17.867456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.281 [2024-10-21 12:13:17.867701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.281 [2024-10-21 12:13:17.867925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.281 [2024-10-21 12:13:17.867935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.281 [2024-10-21 12:13:17.867943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.543 [2024-10-21 12:13:17.871505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.543 [2024-10-21 12:13:17.880714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.543 [2024-10-21 12:13:17.881345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.543 [2024-10-21 12:13:17.881389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.543 [2024-10-21 12:13:17.881402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.543 [2024-10-21 12:13:17.881647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.543 [2024-10-21 12:13:17.881876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.881887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.881895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.885462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:17.894667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.895343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.895390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.895403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.895650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.895875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.895885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.895894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.899460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.544 [2024-10-21 12:13:17.908471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.909101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.909150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.909162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.909418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.909645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.909655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.909663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.913228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:17.922456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.923048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.923073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.923082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.923304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.923537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.923549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.923556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.927116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.544 [2024-10-21 12:13:17.936338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.936930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.936952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.936961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.937182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.937413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.937425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.937434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.940989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:17.950265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.950966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.951026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.951040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.951294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.951537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.951550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.951558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.955126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.544 [2024-10-21 12:13:17.964147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.964861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.964925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.964939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.965196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.965443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.965456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.965465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.969039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:17.978054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.978691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.978729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.978740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.978966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.979189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.979199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.979207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.982781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.544 [2024-10-21 12:13:17.991989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:17.992651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:17.992715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:17.992729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:17.992987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:17.993216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:17.993229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:17.993238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:17.996817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:18.005831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:18.006448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:18.006514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:18.006528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:18.006785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:18.007015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:18.007027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:18.007036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:18.010629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.544 [2024-10-21 12:13:18.019860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:18.020645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:18.020710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.544 [2024-10-21 12:13:18.020723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.544 [2024-10-21 12:13:18.020980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.544 [2024-10-21 12:13:18.021222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.544 [2024-10-21 12:13:18.021235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.544 [2024-10-21 12:13:18.021243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.544 [2024-10-21 12:13:18.024833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.544 [2024-10-21 12:13:18.033855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.544 [2024-10-21 12:13:18.034602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.544 [2024-10-21 12:13:18.034667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.034681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.034938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.035167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.035179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.035188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.038777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.545 [2024-10-21 12:13:18.047801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.048483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.048547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.048561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.048818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.049047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.049058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.049067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.052914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.545 [2024-10-21 12:13:18.061757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.062516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.062581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.062595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.062852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.063082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.063095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.063103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.066685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.545 [2024-10-21 12:13:18.075723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.076488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.076554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.076569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.076826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.077055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.077067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.077077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.080662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.545 [2024-10-21 12:13:18.089685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.090431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.090497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.090512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.090771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.091000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.091012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.091021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.094603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.545 [2024-10-21 12:13:18.103617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.104335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.104399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.104412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.104669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.104898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.104910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.104920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.108499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.545 [2024-10-21 12:13:18.117563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.118239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.118304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.118339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.118598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.118826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.118838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.118846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.122418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
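(Editor's note: the retry cadence can be read straight off the wall-clock stamps on the "resetting controller" notices. Taking the eight attempts above as one uniform train, a back-of-the-envelope from this excerpt only:

$\frac{18.117563\,\mathrm{s} - 18.019860\,\mathrm{s}}{7} \approx 13.96\,\mathrm{ms}$ per attempt,

i.e. the driver is cycling through disconnect, connect, fail roughly every 14 ms here. This is an observation about this excerpt, not a claim about any configured reconnect delay.)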
00:28:41.545 [2024-10-21 12:13:18.131433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.545 [2024-10-21 12:13:18.132157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.545 [2024-10-21 12:13:18.132221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.545 [2024-10-21 12:13:18.132235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.545 [2024-10-21 12:13:18.132512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.545 [2024-10-21 12:13:18.132743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.545 [2024-10-21 12:13:18.132754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.545 [2024-10-21 12:13:18.132763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.545 [2024-10-21 12:13:18.136344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.808 [2024-10-21 12:13:18.145383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.808 [2024-10-21 12:13:18.146116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.808 [2024-10-21 12:13:18.146181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.808 [2024-10-21 12:13:18.146195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.808 [2024-10-21 12:13:18.146469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.808 [2024-10-21 12:13:18.146713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.808 [2024-10-21 12:13:18.146727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.808 [2024-10-21 12:13:18.146735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.808 [2024-10-21 12:13:18.150326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.808 [2024-10-21 12:13:18.159342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.808 [2024-10-21 12:13:18.159965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.808 [2024-10-21 12:13:18.159994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.808 [2024-10-21 12:13:18.160005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.808 [2024-10-21 12:13:18.160229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.808 [2024-10-21 12:13:18.160467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.808 [2024-10-21 12:13:18.160486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.808 [2024-10-21 12:13:18.160495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.808 [2024-10-21 12:13:18.164057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.808 [2024-10-21 12:13:18.173280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.808 [2024-10-21 12:13:18.173852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.808 [2024-10-21 12:13:18.173877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.808 [2024-10-21 12:13:18.173886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.808 [2024-10-21 12:13:18.174109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.808 [2024-10-21 12:13:18.174340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.174352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.174360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.177916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.809 [2024-10-21 12:13:18.187267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.187890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.187916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.187926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.188149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.188382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.188395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.188403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.191962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.809 [2024-10-21 12:13:18.201174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.201753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.201778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.201787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.202009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.202232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.202244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.202252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.205821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.809 [2024-10-21 12:13:18.215031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.215759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.215825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.215839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.216096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.216341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.216355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.216364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.219934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.809 [2024-10-21 12:13:18.228958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.229679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.229744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.229758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.230015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.230245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.230257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.230266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.233854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.809 [2024-10-21 12:13:18.242867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.243495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.243526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.243536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.243761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.243991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.244004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.244013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 9423.33 IOPS, 36.81 MiB/s [2024-10-21T10:13:18.404Z] [2024-10-21 12:13:18.249248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.809 [2024-10-21 12:13:18.256816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.257519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.257584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.257598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.257862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.258091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.258102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.258111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.261700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
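(Editor's note: the "9423.33 IOPS, 36.81 MiB/s" sample interleaved above is the test's periodic performance print. The I/O size is not shown in this excerpt, but it is implied by the ratio of the two numbers, taking MiB as $2^{20}$ bytes:

$\frac{36.81 \times 2^{20}\ \mathrm{B/s}}{9423.33\ \mathrm{ops/s}} \approx 4096\ \mathrm{B} = 4\,\mathrm{KiB}$,

and conversely $9423.33 \times 4096\ \mathrm{B} \approx 36.81\ \mathrm{MiB/s}$, so the workload is almost certainly running 4 KiB I/Os.)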
00:28:41.809 [2024-10-21 12:13:18.270713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.271400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.271466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.271479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.271737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.271967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.271979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.271987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.275581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.809 [2024-10-21 12:13:18.284606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.285331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.285394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.285408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.285665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.285894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.285906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.285915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.289493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.809 [2024-10-21 12:13:18.298508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.299139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.299168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.299178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.299415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.299639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.299651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.299668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.303230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.809 [2024-10-21 12:13:18.312447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.313021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.313048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.313057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.809 [2024-10-21 12:13:18.313280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.809 [2024-10-21 12:13:18.313514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.809 [2024-10-21 12:13:18.313526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.809 [2024-10-21 12:13:18.313534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.809 [2024-10-21 12:13:18.317095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.809 [2024-10-21 12:13:18.326339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.809 [2024-10-21 12:13:18.327001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.809 [2024-10-21 12:13:18.327065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.809 [2024-10-21 12:13:18.327078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.327352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.327583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.327595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.327604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.331175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.810 [2024-10-21 12:13:18.340190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.810 [2024-10-21 12:13:18.340871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.810 [2024-10-21 12:13:18.340935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.810 [2024-10-21 12:13:18.340949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.341207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.341451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.341464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.341472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.345047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.810 [2024-10-21 12:13:18.354111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.810 [2024-10-21 12:13:18.354630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.810 [2024-10-21 12:13:18.354659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.810 [2024-10-21 12:13:18.354668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.354892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.355115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.355126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.355135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.358708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.810 [2024-10-21 12:13:18.368131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.810 [2024-10-21 12:13:18.368702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.810 [2024-10-21 12:13:18.368727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.810 [2024-10-21 12:13:18.368736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.368959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.369181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.369193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.369201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.372764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.810 [2024-10-21 12:13:18.381969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.810 [2024-10-21 12:13:18.382667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.810 [2024-10-21 12:13:18.382732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.810 [2024-10-21 12:13:18.382745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.383004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.383233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.383245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.383253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.386840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.810 [2024-10-21 12:13:18.395877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.810 [2024-10-21 12:13:18.396479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.810 [2024-10-21 12:13:18.396544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:41.810 [2024-10-21 12:13:18.396558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:41.810 [2024-10-21 12:13:18.396816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:41.810 [2024-10-21 12:13:18.397052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.810 [2024-10-21 12:13:18.397064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.810 [2024-10-21 12:13:18.397073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.810 [2024-10-21 12:13:18.400665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.074 [2024-10-21 12:13:18.409720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.410414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.410480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.410493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.410751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.410980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.410992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.411001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.414592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.074 [2024-10-21 12:13:18.423617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.424264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.424293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.424303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.424538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.424762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.424774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.424782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.428341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
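(Editor's note: the "(9): Bad file descriptor" in each cycle follows directly from the refused connect: once the qpair's socket has been torn down, the subsequent flush in nvme_tcp_qpair_process_completions is operating on a dead fd, and errno 9 is EBADF. A minimal illustration of where error 9 comes from, not SPDK's actual flush path: write to an fd that has already been closed.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* socket torn down, as after a failed connect */
    if (write(fd, "x", 1) < 0) {
        /* prints: flush failed: (9) Bad file descriptor */
        printf("flush failed: (%d) %s\n", errno, strerror(errno));
    }
    return 0;
}
)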
00:28:42.074 [2024-10-21 12:13:18.437650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.438363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.438428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.438442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.438700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.438929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.438940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.438949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.442546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.074 [2024-10-21 12:13:18.451595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.452286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.452361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.452375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.452633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.452862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.452873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.452882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.456463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.074 [2024-10-21 12:13:18.465478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.466160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.466225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.466238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.466515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.466745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.466756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.466765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.470334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.074 [2024-10-21 12:13:18.479349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.480033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.480097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.480110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.480385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.480614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.480627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.480636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.484204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.074 [2024-10-21 12:13:18.493220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.493900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.493971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.493985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.494242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.494488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.494501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.494510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.498083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.074 [2024-10-21 12:13:18.507107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.507739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.507770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.507780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.508004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.508227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.508237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.508246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.511816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.074 [2024-10-21 12:13:18.521025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.074 [2024-10-21 12:13:18.521749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.074 [2024-10-21 12:13:18.521813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.074 [2024-10-21 12:13:18.521827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.074 [2024-10-21 12:13:18.522084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.074 [2024-10-21 12:13:18.522313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.074 [2024-10-21 12:13:18.522340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.074 [2024-10-21 12:13:18.522349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.074 [2024-10-21 12:13:18.525924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.075 [2024-10-21 12:13:18.534964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.535630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.535695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.535709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.535966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.536203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.536216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.536224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.539815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.075 [2024-10-21 12:13:18.548847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.549457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.549521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.549535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.549792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.550022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.550035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.550044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.553648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.075 [2024-10-21 12:13:18.562662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.563408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.563473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.563487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.563745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.563974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.563986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.563995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.567590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.075 [2024-10-21 12:13:18.576632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.577319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.577398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.577411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.577669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.577898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.577910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.577918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.581511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.075 [2024-10-21 12:13:18.590570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.591266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.591345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.591360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.591618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.591847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.591859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.591868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.595445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.075 [2024-10-21 12:13:18.604480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.605202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.605268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.605282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.605552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.605782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.605795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.605804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.609373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.075 [2024-10-21 12:13:18.618399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.619024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.619054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.619064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.619288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.619521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.619533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.619542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.623109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.075 [2024-10-21 12:13:18.632362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.632946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.632975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.632994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.633220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.633450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.633463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.633471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.637030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.075 [2024-10-21 12:13:18.646254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.075 [2024-10-21 12:13:18.646863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.075 [2024-10-21 12:13:18.646888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.075 [2024-10-21 12:13:18.646898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.075 [2024-10-21 12:13:18.647121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.075 [2024-10-21 12:13:18.647354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.075 [2024-10-21 12:13:18.647366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.075 [2024-10-21 12:13:18.647374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.075 [2024-10-21 12:13:18.650979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.075 - 00:28:42.867 [12:13:18.660 - 12:13:19.238] The identical reconnect-failure cycle for tqpair=0x1082a10 repeats 46 more times, one attempt roughly every 13-14 ms; every attempt fails with connect() errno = 111 against 10.0.0.2:4420 and ends with "Resetting controller failed."
00:28:42.867 [2024-10-21 12:13:19.248263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.867 [2024-10-21 12:13:19.248633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.867 [2024-10-21 12:13:19.248664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.248673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 7067.50 IOPS, 27.61 MiB/s [2024-10-21T10:13:19.463Z] [2024-10-21 12:13:19.249977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.250132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.250139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.250145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.252600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.868 [2024-10-21 12:13:19.260919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.261507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.261538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.261547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.261716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.261871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.261878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.261884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.264325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
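The throughput sample interleaved above ("7067.50 IOPS, 27.61 MiB/s") is internally consistent with a 4 KiB I/O size, which is the assumption the numbers imply: 7067.50 x 4096 B = 28,948,480 B/s, and 28,948,480 / 1024^2 = 27.61 MiB/s. The perf reporter keeps emitting interval samples even while the reset loop is failing, presumably reflecting I/O completed earlier in the sampling window.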
00:28:42.868 [2024-10-21 12:13:19.273649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.274212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.274243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.274252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.274428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.274583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.274590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.274599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.277035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.868 [2024-10-21 12:13:19.286358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.286906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.286938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.286947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.287115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.287269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.287277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.287282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.289725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.868 [2024-10-21 12:13:19.299045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.299643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.299675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.299684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.299852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.300006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.300013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.300019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.302462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.868 [2024-10-21 12:13:19.311777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.312274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.312290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.312296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.312452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.312603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.312610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.312616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.315046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.868 [2024-10-21 12:13:19.324504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.324959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.324971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.324977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.325128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.325280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.325286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.325291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.327728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.868 [2024-10-21 12:13:19.337190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.337790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.337821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.337830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.337997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.338152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.338159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.338165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.340609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.868 [2024-10-21 12:13:19.349926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.350520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.350551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.350561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.350728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.868 [2024-10-21 12:13:19.350890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.868 [2024-10-21 12:13:19.350898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.868 [2024-10-21 12:13:19.350904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.868 [2024-10-21 12:13:19.353349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.868 [2024-10-21 12:13:19.362667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.868 [2024-10-21 12:13:19.363128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.868 [2024-10-21 12:13:19.363143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.868 [2024-10-21 12:13:19.363149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.868 [2024-10-21 12:13:19.363301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.363462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.363469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.363475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.365909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.869 [2024-10-21 12:13:19.375364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.375947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.375978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.375988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.376154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.376309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.376316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.376329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.378767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.869 [2024-10-21 12:13:19.388087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.388693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.388725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.388733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.388900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.389055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.389061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.389067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.391508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.869 [2024-10-21 12:13:19.400898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.401423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.401455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.401464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.401633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.401788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.401795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.401801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.404246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.869 [2024-10-21 12:13:19.413561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.414095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.414126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.414135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.414302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.414465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.414473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.414479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.416914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.869 [2024-10-21 12:13:19.426235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.426713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.426743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.426752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.426918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.427072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.427079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.427084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.429533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.869 [2024-10-21 12:13:19.438849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.439387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.439419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.439428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.439594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.439749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.439756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.439762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.442204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.869 [2024-10-21 12:13:19.451528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.869 [2024-10-21 12:13:19.452129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.869 [2024-10-21 12:13:19.452161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:42.869 [2024-10-21 12:13:19.452173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:42.869 [2024-10-21 12:13:19.452354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:42.869 [2024-10-21 12:13:19.452510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.869 [2024-10-21 12:13:19.452517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.869 [2024-10-21 12:13:19.452522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.869 [2024-10-21 12:13:19.454957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.132 [2024-10-21 12:13:19.464324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.132 [2024-10-21 12:13:19.464929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.132 [2024-10-21 12:13:19.464961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.132 [2024-10-21 12:13:19.464970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.132 [2024-10-21 12:13:19.465137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.132 [2024-10-21 12:13:19.465291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.132 [2024-10-21 12:13:19.465298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.132 [2024-10-21 12:13:19.465304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.132 [2024-10-21 12:13:19.467748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.132 [2024-10-21 12:13:19.477062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.132 [2024-10-21 12:13:19.477623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.132 [2024-10-21 12:13:19.477655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.132 [2024-10-21 12:13:19.477664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.132 [2024-10-21 12:13:19.477831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.132 [2024-10-21 12:13:19.477986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.132 [2024-10-21 12:13:19.477993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.132 [2024-10-21 12:13:19.477999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.132 [2024-10-21 12:13:19.480447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.132 [2024-10-21 12:13:19.489758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.490351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.490382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.490391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.490558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.490716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.490724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.490729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.493173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.133 [2024-10-21 12:13:19.502491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.503066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.503097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.503106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.503274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.503436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.503444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.503449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.505884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.133 [2024-10-21 12:13:19.515205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.515762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.515794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.515802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.515969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.516124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.516131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.516137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.518580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.133 [2024-10-21 12:13:19.527897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.528458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.528489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.528498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.528667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.528821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.528829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.528834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.531277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.133 [2024-10-21 12:13:19.540604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.541201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.541232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.541240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.541416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.541571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.541578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.541585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.544021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.133 [2024-10-21 12:13:19.553348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.553945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.553976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.553985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.554152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.554306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.554314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.554319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.556762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.133 [2024-10-21 12:13:19.566075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.566644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.566675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.566684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.566852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.567006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.567013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.567018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.569461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.133 [2024-10-21 12:13:19.578778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.579336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.579367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.579380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.579549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.579704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.579710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.579716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.582154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.133 [2024-10-21 12:13:19.591468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.591927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.591942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.591948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.592100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.133 [2024-10-21 12:13:19.592252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.133 [2024-10-21 12:13:19.592259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.133 [2024-10-21 12:13:19.592264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.133 [2024-10-21 12:13:19.594700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.133 [2024-10-21 12:13:19.604150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.133 [2024-10-21 12:13:19.604737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.133 [2024-10-21 12:13:19.604769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.133 [2024-10-21 12:13:19.604778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.133 [2024-10-21 12:13:19.604945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.605099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.605106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.605112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.607556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.134 [2024-10-21 12:13:19.616870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.617421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.617452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.617462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.617631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.617785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.617799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.617806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.620246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.134 [2024-10-21 12:13:19.629557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.630024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.630054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.630062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.630231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.630392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.630399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.630405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.632841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.134 [2024-10-21 12:13:19.642301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.642892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.642924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.642933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.643099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.643254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.643261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.643267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.645709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.134 [2024-10-21 12:13:19.655039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.655630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.655662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.655671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.655837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.655992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.655999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.656004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.658448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.134 [2024-10-21 12:13:19.667758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.668335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.668367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.668376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.668545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.668699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.668706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.668712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.671157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.134 [2024-10-21 12:13:19.680484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.681084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.681115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.681124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.681291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.681453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.681461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.681466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.683903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.134 [2024-10-21 12:13:19.693213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.693762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.693794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.693803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.693970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.694124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.694131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.694137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.696581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.134 [2024-10-21 12:13:19.705897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.706425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.706456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.706465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.706638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.706792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.706800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.706805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.709245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.134 [2024-10-21 12:13:19.718565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.134 [2024-10-21 12:13:19.719061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.134 [2024-10-21 12:13:19.719076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.134 [2024-10-21 12:13:19.719082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.134 [2024-10-21 12:13:19.719234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.134 [2024-10-21 12:13:19.719391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.134 [2024-10-21 12:13:19.719398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.134 [2024-10-21 12:13:19.719403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.134 [2024-10-21 12:13:19.721833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.412 [2024-10-21 12:13:19.731288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.412 [2024-10-21 12:13:19.731876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.412 [2024-10-21 12:13:19.731907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.412 [2024-10-21 12:13:19.731916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.413 [2024-10-21 12:13:19.732083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.413 [2024-10-21 12:13:19.732238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.413 [2024-10-21 12:13:19.732245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.413 [2024-10-21 12:13:19.732250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.413 [2024-10-21 12:13:19.734695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.413 [2024-10-21 12:13:19.744012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.413 [2024-10-21 12:13:19.744578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.413 [2024-10-21 12:13:19.744609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.413 [2024-10-21 12:13:19.744618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.413 [2024-10-21 12:13:19.744785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.413 [2024-10-21 12:13:19.744939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.413 [2024-10-21 12:13:19.744947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.413 [2024-10-21 12:13:19.744956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.413 [2024-10-21 12:13:19.747399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.413 [2024-10-21 12:13:19.756728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.757282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.757314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.757330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.757498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.757652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.757659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.757666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.760102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.769433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.770025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.770056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.770065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.770232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.770394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.770402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.770407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.772840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.782152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.782789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.782821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.782830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.782996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.783151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.783158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.783164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.785607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.794778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.795386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.795421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.795430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.795597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.795752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.795759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.795765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.798208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.807523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.808076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.808106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.808115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.808283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.808448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.808456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.808461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.810895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.820207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.820758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.820789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.820798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.820965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.821120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.821127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.821133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.823574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.832892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.833423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.833454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.833464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.833632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.833790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.833798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.833803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.836245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.845564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.846072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.846087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.846093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.846245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.846402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.846410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.846416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.848845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.858170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.413 [2024-10-21 12:13:19.858671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.413 [2024-10-21 12:13:19.858685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.413 [2024-10-21 12:13:19.858691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.413 [2024-10-21 12:13:19.858842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.413 [2024-10-21 12:13:19.858993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.413 [2024-10-21 12:13:19.858999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.413 [2024-10-21 12:13:19.859004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.413 [2024-10-21 12:13:19.861437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.413 [2024-10-21 12:13:19.870892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.871238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.871252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.871259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.871501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.871655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.871663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.871668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.874107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.883570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.884107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.884138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.884147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.884315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.884477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.884485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.884491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.886926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.896241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.896746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.896761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.896767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.896918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.897069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.897076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.897081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.899517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.908974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.909581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.909613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.909622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.909788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.909943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.909950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.909956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.912396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.921720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.922314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.922351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.922364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.922532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.922687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.922694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.922700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.925135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.934459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.935052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.935084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.935093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.935260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.935421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.935429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.935435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.937872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.947192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.947747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.947779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.947787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.947955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.948109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.948116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.948122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.950567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.959910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.960440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.960472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.960480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.960649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.960803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.960814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.960820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.963261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.972577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.973171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.973202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.973211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.973384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.973539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.973546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.973552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.975986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.985302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.985909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.985940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.985949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.986116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.986271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.986278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.986284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.414 [2024-10-21 12:13:19.988727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.414 [2024-10-21 12:13:19.998050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.414 [2024-10-21 12:13:19.998613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.414 [2024-10-21 12:13:19.998645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.414 [2024-10-21 12:13:19.998654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.414 [2024-10-21 12:13:19.998821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.414 [2024-10-21 12:13:19.998975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.414 [2024-10-21 12:13:19.998982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.414 [2024-10-21 12:13:19.998988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.001434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.010694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.011276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.011308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.011317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.011492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.011647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.011654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.011660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.014093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.023422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.023986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.024018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.024026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.024193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.024355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.024363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.024369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.026806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.036127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.036634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.036665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.036675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.036842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.036996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.037003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.037008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.039452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.048776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.049387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.049418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.049427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.049599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.049754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.049761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.049767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.052210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.061439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.062044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.062075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.062084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.062251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.062414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.062422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.062427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.064865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.074180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.074710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.074742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.074752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.074919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.075074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.075081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.075087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.077530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.086848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.087293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.087309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.087315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.087471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.087623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.087630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.087640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.090077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.099547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.100048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.100062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.100067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.100219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.100377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.100384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.680 [2024-10-21 12:13:20.100390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.680 [2024-10-21 12:13:20.102820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.680 [2024-10-21 12:13:20.112271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.680 [2024-10-21 12:13:20.112833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.680 [2024-10-21 12:13:20.112864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.680 [2024-10-21 12:13:20.112874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.680 [2024-10-21 12:13:20.113042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.680 [2024-10-21 12:13:20.113197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.680 [2024-10-21 12:13:20.113204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.113210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.115649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.124971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.125607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.125638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.125647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.125815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.125969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.125976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.125982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.128425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.137601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.138213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.138244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.138253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.138429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.138584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.138592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.138597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.141034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.150213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.150728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.150744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.150750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.150901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.151052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.151059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.151064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.153506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.162829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.163285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.163298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.163304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.163462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.163614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.163620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.163625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.166055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.175546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.176133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.176164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.176173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.176351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.176507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.176514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.176520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.178957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.188278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.188858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.188889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.188898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.189065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.189220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.189227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.189232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.191676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.200999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.201598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.201628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.201637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.201804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.201959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.201966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.201972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.204417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.213726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.214326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.214357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.214365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.214532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.214686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.214693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.214703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.217140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.681 [2024-10-21 12:13:20.226464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.681 [2024-10-21 12:13:20.226938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.681 [2024-10-21 12:13:20.226969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.681 [2024-10-21 12:13:20.226978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.681 [2024-10-21 12:13:20.227147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.681 [2024-10-21 12:13:20.227301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.681 [2024-10-21 12:13:20.227308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.681 [2024-10-21 12:13:20.227314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.681 [2024-10-21 12:13:20.229757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.682 [2024-10-21 12:13:20.239095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.682 [2024-10-21 12:13:20.239672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.682 [2024-10-21 12:13:20.239703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.682 [2024-10-21 12:13:20.239712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.682 [2024-10-21 12:13:20.239881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.682 [2024-10-21 12:13:20.240035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.682 [2024-10-21 12:13:20.240043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.682 [2024-10-21 12:13:20.240048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.682 [2024-10-21 12:13:20.242493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.682 5654.00 IOPS, 22.09 MiB/s [2024-10-21T10:13:20.277Z] [2024-10-21 12:13:20.252947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.682 [2024-10-21 12:13:20.253633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.682 [2024-10-21 12:13:20.253665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.682 [2024-10-21 12:13:20.253674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.682 [2024-10-21 12:13:20.253841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.682 [2024-10-21 12:13:20.254003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.682 [2024-10-21 12:13:20.254012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.682 [2024-10-21 12:13:20.254018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.682 [2024-10-21 12:13:20.256465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.682 [2024-10-21 12:13:20.265638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.682 [2024-10-21 12:13:20.266199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.682 [2024-10-21 12:13:20.266236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.682 [2024-10-21 12:13:20.266245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.682 [2024-10-21 12:13:20.266418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.682 [2024-10-21 12:13:20.266573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.682 [2024-10-21 12:13:20.266580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.682 [2024-10-21 12:13:20.266586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.682 [2024-10-21 12:13:20.269022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.946 [2024-10-21 12:13:20.278340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.946 [2024-10-21 12:13:20.278929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.946 [2024-10-21 12:13:20.278961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.946 [2024-10-21 12:13:20.278970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.946 [2024-10-21 12:13:20.279138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.946 [2024-10-21 12:13:20.279292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.946 [2024-10-21 12:13:20.279299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.946 [2024-10-21 12:13:20.279305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.946 [2024-10-21 12:13:20.281751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.946 [2024-10-21 12:13:20.291062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.946 [2024-10-21 12:13:20.291645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.946 [2024-10-21 12:13:20.291676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.946 [2024-10-21 12:13:20.291685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.946 [2024-10-21 12:13:20.291853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.946 [2024-10-21 12:13:20.292007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.292014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.292020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.294463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.303786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.304277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.304292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.304299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.304456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.304613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.304620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.304625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.307054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.316508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.317097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.317128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.317137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.317305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.317468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.317476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.317481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.319918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.329232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.329827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.329858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.329867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.330034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.330188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.330196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.330201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.332646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.341958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.342433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.342464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.342474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.342643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.342798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.342805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.342810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.345256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.354582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.355093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.355122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.355131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.355298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.355458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.355466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.355472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.357917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.367241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.367711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.367726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.367733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.367885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.368036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.368043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.368049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.370484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.379948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.380432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.380445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.380451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.380602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.380754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.380760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.380765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.383194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.392657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.392998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.393011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.393020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.393171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.393327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.393334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.947 [2024-10-21 12:13:20.393339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.947 [2024-10-21 12:13:20.395767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.947 [2024-10-21 12:13:20.405369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.947 [2024-10-21 12:13:20.405725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.947 [2024-10-21 12:13:20.405737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:43.947 [2024-10-21 12:13:20.405743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:43.947 [2024-10-21 12:13:20.405894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:43.947 [2024-10-21 12:13:20.406045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.947 [2024-10-21 12:13:20.406051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.948 [2024-10-21 12:13:20.406057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.948 [2024-10-21 12:13:20.408488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.948 [2024-10-21 12:13:20.418088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.418645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.418677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.418686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.418855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.419009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.419016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.419022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.421463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.948 [2024-10-21 12:13:20.430723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.431183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.431199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.431206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.431362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.431514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.431525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.431530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.433963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.948 [2024-10-21 12:13:20.443426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.444005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.444037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.444045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.444212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.444373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.444381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.444387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.446821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.948 [2024-10-21 12:13:20.456148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.456635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.456666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.456675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.456843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.456998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.457005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.457011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.459457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.948 [2024-10-21 12:13:20.468774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.469377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.469409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.469418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.469585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.469739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.469746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.469751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.472195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.948 [2024-10-21 12:13:20.481388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.481965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.481996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.482005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.482172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.482333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.482341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.482347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.484784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.948 [2024-10-21 12:13:20.494111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.494608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.494624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.494630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.494782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.494933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.494940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.494945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.497398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.948 [2024-10-21 12:13:20.506723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.507072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.507087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.948 [2024-10-21 12:13:20.507093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.948 [2024-10-21 12:13:20.507244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.948 [2024-10-21 12:13:20.507406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.948 [2024-10-21 12:13:20.507414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.948 [2024-10-21 12:13:20.507419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.948 [2024-10-21 12:13:20.509849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.948 [2024-10-21 12:13:20.519448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.948 [2024-10-21 12:13:20.520052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.948 [2024-10-21 12:13:20.520083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.949 [2024-10-21 12:13:20.520096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.949 [2024-10-21 12:13:20.520263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.949 [2024-10-21 12:13:20.520424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.949 [2024-10-21 12:13:20.520432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.949 [2024-10-21 12:13:20.520438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.949 [2024-10-21 12:13:20.522875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.949 [2024-10-21 12:13:20.532057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.949 [2024-10-21 12:13:20.532497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.949 [2024-10-21 12:13:20.532513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:43.949 [2024-10-21 12:13:20.532519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:43.949 [2024-10-21 12:13:20.532671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:43.949 [2024-10-21 12:13:20.532823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.949 [2024-10-21 12:13:20.532829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.949 [2024-10-21 12:13:20.532834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.949 [2024-10-21 12:13:20.535268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.213 [2024-10-21 12:13:20.544738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.545199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.545212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.545218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.545376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.545528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.545534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.213 [2024-10-21 12:13:20.545540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.213 [2024-10-21 12:13:20.547973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.213 [2024-10-21 12:13:20.557469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.557959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.557972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.557978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.558129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.558281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.558288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.213 [2024-10-21 12:13:20.558296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.213 [2024-10-21 12:13:20.560739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.213 [2024-10-21 12:13:20.570212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.570760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.570791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.570800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.570967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.571122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.571130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.213 [2024-10-21 12:13:20.571135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.213 [2024-10-21 12:13:20.573580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.213 [2024-10-21 12:13:20.582911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.583364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.583380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.583386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.583537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.583689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.583696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.213 [2024-10-21 12:13:20.583701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.213 [2024-10-21 12:13:20.586136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.213 [2024-10-21 12:13:20.595609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.596067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.596080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.596086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.596237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.596393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.596400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.213 [2024-10-21 12:13:20.596405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.213 [2024-10-21 12:13:20.598837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.213 [2024-10-21 12:13:20.608307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.213 [2024-10-21 12:13:20.608686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.213 [2024-10-21 12:13:20.608699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.213 [2024-10-21 12:13:20.608705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.213 [2024-10-21 12:13:20.608856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.213 [2024-10-21 12:13:20.609007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.213 [2024-10-21 12:13:20.609014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.609019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.611455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.214 [2024-10-21 12:13:20.620929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.621558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.621590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.621599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.621766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.621920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.621927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.621933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.624374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.214 [2024-10-21 12:13:20.633558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.634059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.634075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.634081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.634232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.634389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.634396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.634402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.636833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.214 [2024-10-21 12:13:20.646299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.646744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.646776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.646785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.646956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.647110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.647117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.647122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.649570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.214 [2024-10-21 12:13:20.658919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.659418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.659433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.659439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.659591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.659742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.659748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.659754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.662188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.214 [2024-10-21 12:13:20.671660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.672102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.672115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.672121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.672272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.672428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.672435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.672441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.674873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.214 [2024-10-21 12:13:20.684345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.684933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.684964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.684973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.685142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.685296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.685304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.685314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.687762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.214 [2024-10-21 12:13:20.696961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.697442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.697474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.697483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.697651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.697806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.697812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.697818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.700260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.214 [2024-10-21 12:13:20.709590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.710194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.710225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.710235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.710407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.710562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.710569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.710575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.713012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.214 [2024-10-21 12:13:20.722340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.722935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.214 [2024-10-21 12:13:20.722966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.214 [2024-10-21 12:13:20.722975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.214 [2024-10-21 12:13:20.723143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.214 [2024-10-21 12:13:20.723297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.214 [2024-10-21 12:13:20.723304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.214 [2024-10-21 12:13:20.723310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.214 [2024-10-21 12:13:20.725754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.214 [2024-10-21 12:13:20.735074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.214 [2024-10-21 12:13:20.735638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.215 [2024-10-21 12:13:20.735672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.215 [2024-10-21 12:13:20.735682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.215 [2024-10-21 12:13:20.735850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.215 [2024-10-21 12:13:20.736004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.215 [2024-10-21 12:13:20.736011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.215 [2024-10-21 12:13:20.736017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.215 [2024-10-21 12:13:20.738464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.215 [2024-10-21 12:13:20.747816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.215 [2024-10-21 12:13:20.748431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.215 [2024-10-21 12:13:20.748463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.215 [2024-10-21 12:13:20.748472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.215 [2024-10-21 12:13:20.748639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.215 [2024-10-21 12:13:20.748793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.215 [2024-10-21 12:13:20.748802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.215 [2024-10-21 12:13:20.748807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.215 [2024-10-21 12:13:20.751245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.215 [2024-10-21 12:13:20.760445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.215 [2024-10-21 12:13:20.761040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.215 [2024-10-21 12:13:20.761071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.215 [2024-10-21 12:13:20.761080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.215 [2024-10-21 12:13:20.761247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.215 [2024-10-21 12:13:20.761409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.215 [2024-10-21 12:13:20.761417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.215 [2024-10-21 12:13:20.761423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.215 [2024-10-21 12:13:20.763859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.215 [2024-10-21 12:13:20.773183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.215 [2024-10-21 12:13:20.773781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.215 [2024-10-21 12:13:20.773813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.215 [2024-10-21 12:13:20.773822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.215 [2024-10-21 12:13:20.773989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.215 [2024-10-21 12:13:20.774147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.215 [2024-10-21 12:13:20.774154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.215 [2024-10-21 12:13:20.774160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.215 [2024-10-21 12:13:20.776600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.215 [2024-10-21 12:13:20.785920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.215 [2024-10-21 12:13:20.786427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.215 [2024-10-21 12:13:20.786459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420 00:28:44.215 [2024-10-21 12:13:20.786468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set 00:28:44.215 [2024-10-21 12:13:20.786638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor 00:28:44.215 [2024-10-21 12:13:20.786792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.215 [2024-10-21 12:13:20.786799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.215 [2024-10-21 12:13:20.786805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.215 [2024-10-21 12:13:20.789247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.215 [2024-10-21 12:13:20.798570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.215 [2024-10-21 12:13:20.799009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.215 [2024-10-21 12:13:20.799024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:44.215 [2024-10-21 12:13:20.799031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:44.215 [2024-10-21 12:13:20.799182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:44.215 [2024-10-21 12:13:20.799338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.215 [2024-10-21 12:13:20.799345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.215 [2024-10-21 12:13:20.799351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1153167 Killed "${NVMF_APP[@]}" "$@"
00:28:44.215 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:44.215 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:44.215 [2024-10-21 12:13:20.801782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.215 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:44.215 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:44.215 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.478 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1154737
00:28:44.478 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1154737
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1154737 ']'
00:28:44.479 [2024-10-21 12:13:20.811240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:44.479 [2024-10-21 12:13:20.811603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.479 [2024-10-21 12:13:20.811616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:44.479 [2024-10-21 12:13:20.811625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:44.479 [2024-10-21 12:13:20.811777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:44.479 [2024-10-21 12:13:20.811928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.479 [2024-10-21 12:13:20.811935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.479 [2024-10-21 12:13:20.811940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:44.479 12:13:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.479 [2024-10-21 12:13:20.814374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.479 [2024-10-21 12:13:20.823975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.479 [2024-10-21 12:13:20.824453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.479 [2024-10-21 12:13:20.824485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:44.479 [2024-10-21 12:13:20.824494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:44.479 [2024-10-21 12:13:20.824663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:44.479 [2024-10-21 12:13:20.824817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.479 [2024-10-21 12:13:20.824824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.479 [2024-10-21 12:13:20.824829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.479 [2024-10-21 12:13:20.827270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.479 [2024-10-21 12:13:20.836594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.479 [2024-10-21 12:13:20.837191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.479 [2024-10-21 12:13:20.837223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:44.479 [2024-10-21 12:13:20.837232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:44.479 [2024-10-21 12:13:20.837408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:44.479 [2024-10-21 12:13:20.837563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.479 [2024-10-21 12:13:20.837570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.479 [2024-10-21 12:13:20.837576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
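The waitforlisten 1154737 step above blocks until the freshly launched nvmf_tgt (pid 1154737) is up and serving RPCs on /var/tmp/spdk.sock, retrying up to max_retries=100 times. As a rough analogue only, and not the harness's actual implementation (which polls a UNIX-domain RPC socket rather than TCP), a poll-until-listening helper might look like:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Retry a TCP connect until a listener appears or retries run out. */
    static bool wait_for_listener(const char *ip, uint16_t port, int max_retries)
    {
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                return false;
            }
            int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
            close(fd);
            if (rc == 0) {
                return true;           /* a listener answered */
            }
            if (errno != ECONNREFUSED) {
                return false;          /* unexpected failure; give up */
            }
            usleep(100 * 1000);        /* back off 100 ms, then retry */
        }
        return false;
    }

    int main(void)
    {
        /* Address, port, and retry budget are illustrative placeholders. */
        return wait_for_listener("127.0.0.1", 4420, 100) ? 0 : 1;
    }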
00:28:44.479 [2024-10-21 12:13:20.840020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.479 [2024-10-21 12:13:20.849219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.479 [2024-10-21 12:13:20.849647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.479 [2024-10-21 12:13:20.849677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082a10 with addr=10.0.0.2, port=4420
00:28:44.479 [2024-10-21 12:13:20.849686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082a10 is same with the state(6) to be set
00:28:44.479 [2024-10-21 12:13:20.849854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082a10 (9): Bad file descriptor
00:28:44.479 [2024-10-21 12:13:20.850008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.479 [2024-10-21 12:13:20.850015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.479 [2024-10-21 12:13:20.850020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.479 [2024-10-21 12:13:20.852466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the identical resetting-controller / connect() refused (errno = 111) / reset-failed cycle against tqpair=0x1082a10 (10.0.0.2:4420) repeats at 12:13:20.861, .874, .887, .900, .912, .925 and .937, with only the timestamps changing; the non-repeated messages interleaved with those retries are kept below. ...]
00:28:44.479 [2024-10-21 12:13:20.863432] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:28:44.479 [2024-10-21 12:13:20.863482] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:44.480 [2024-10-21 12:13:20.926180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
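A note on the EAL parameters just above (added for orientation, not log output): the core mask '-c 0xE' selects cores 1-3, which is exactly why the target reports 'Total cores available: 3' and starts three reactors below:

    0xE = 0b1110  ->  bit 0 clear, bits 1-3 set  ->  cores {1, 2, 3}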
[... the resetting-controller / connect() refused (errno = 111) / reset-failed cycle continues unchanged at 12:13:20.950, .963, .976, .988 and 12:13:21.001, .014, .026, .039, .052, .065 while the target finishes starting; the unique messages from this stretch follow. ...]
00:28:44.480 [2024-10-21 12:13:20.955424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:44.480 [2024-10-21 12:13:20.955447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:44.480 [2024-10-21 12:13:20.955454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:44.480 [2024-10-21 12:13:20.955460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:44.480 [2024-10-21 12:13:20.955465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:44.480 [2024-10-21 12:13:20.959337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:44.480 [2024-10-21 12:13:20.959524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:44.480 [2024-10-21 12:13:20.959525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:44.481 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:44.481 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:28:44.481 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:44.481 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:44.481 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
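errno = 111 in the cycle above is ECONNREFUSED: the bdevperf host is dialing 10.0.0.2:4420 before the target's TCP listener exists, so every reconnect poll fails and the controller stays in the failed state. A minimal bash sketch of an equivalent readiness probe (illustrative only, not part of the autotest scripts; assumes bash's /dev/tcp redirection is available):

    # Poll the NVMe-oF TCP listener until the target accepts connections,
    # roughly mirroring the ~12-13 ms reconnect cadence seen in the log.
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        sleep 0.013
    done
    echo 'listener at 10.0.0.2:4420 is up'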
00:28:44.743 [... the resetting-controller / connect() refused (errno = 111) / reset-failed cycle against tqpair=0x1082a10 (10.0.0.2:4420) repeats at 12:13:21.077, .090, .103, .116, .128, .141, .154 and .166 while the target is configured over RPC; the configuration trace and its notices follow. ...]
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.743 [2024-10-21 12:13:21.109464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:44.743 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.744 Malloc0
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:44.744 [2024-10-21 12:13:21.175092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:44.744 12:13:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1153556
00:28:44.744 [2024-10-21 12:13:21.179475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.744 [2024-10-21 12:13:21.208556] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
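The xtrace lines above are the target-side bring-up that finally lets a reset succeed: create the TCP transport, create a RAM-backed bdev, expose it through a subsystem, and open the listener the host has been dialing. The same sequence could be replayed by hand against a running nvmf_tgt with SPDK's rpc.py; a sketch (the rpc.py path is an assumption about the checkout layout, while the RPC names and arguments are exactly the ones traced in the log):

    #!/usr/bin/env bash
    set -e
    RPC=./scripts/rpc.py    # assumed location inside an SPDK checkout
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420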
00:28:45.689 4806.00 IOPS, 18.77 MiB/s
[2024-10-21T10:13:23.668Z] 5979.14 IOPS, 23.36 MiB/s
[2024-10-21T10:13:24.612Z] 6859.50 IOPS, 26.79 MiB/s
[2024-10-21T10:13:25.555Z] 7554.67 IOPS, 29.51 MiB/s
[2024-10-21T10:13:26.498Z] 8095.00 IOPS, 31.62 MiB/s
[2024-10-21T10:13:27.441Z] 8528.09 IOPS, 33.31 MiB/s
[2024-10-21T10:13:28.384Z] 8887.42 IOPS, 34.72 MiB/s
[2024-10-21T10:13:29.326Z] 9214.23 IOPS, 35.99 MiB/s
[2024-10-21T10:13:30.712Z] 9486.43 IOPS, 37.06 MiB/s
00:28:54.117 Latency(us)
00:28:54.117 Device Information                                                        : runtime(s)     IOPS    MiB/s    Fail/s    TO/s  Average      min       max
00:28:54.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:54.117 Verification LBA range: start 0x0 length 0x4000
00:28:54.117      Nvme1n1                                                              :      15.01  9717.45    37.96  11067.30    0.00  6139.07   559.79  15400.96
00:28:54.117      ===================================================================================================================
00:28:54.117      Total                                                                :             9717.45    37.96  11067.30    0.00  6139.07   559.79  15400.96
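A quick consistency check on the table above (added arithmetic, not log output): with the 4096-byte I/O size shown in the job line,

    9717.45 IOPS x 4096 B  =  39,802,675 B/s
    39,802,675 / 1,048,576 =  37.96 MiB/s

which matches the MiB/s column for Nvme1n1; the per-second samples leading up to the table (e.g. 4806.00 IOPS, 18.77 MiB/s) obey the same ratio.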
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:54.117 rmmod nvme_tcp
00:28:54.117 rmmod nvme_fabrics
00:28:54.117 rmmod nvme_keyring
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1154737 ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1154737 ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1154737'
00:28:54.117 killing process with pid 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1154737
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:54.117 12:13:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:56.665
00:28:56.665 real    0m28.324s
00:28:56.665 user    1m3.405s
00:28:56.665 sys     0m7.679s
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:56.665 ************************************
00:28:56.665 END TEST nvmf_bdevperf
00:28:56.665 ************************************
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.665 ************************************
00:28:56.665 START TEST nvmf_target_disconnect
00:28:56.665 ************************************
00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:56.665 * Looking for test storage...
00:28:56.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:56.665 12:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:56.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.665 --rc genhtml_branch_coverage=1 00:28:56.665 --rc genhtml_function_coverage=1 00:28:56.665 --rc genhtml_legend=1 00:28:56.665 --rc geninfo_all_blocks=1 00:28:56.665 --rc geninfo_unexecuted_blocks=1 00:28:56.665 00:28:56.665 ' 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:56.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.665 --rc genhtml_branch_coverage=1 00:28:56.665 --rc genhtml_function_coverage=1 00:28:56.665 --rc genhtml_legend=1 00:28:56.665 --rc geninfo_all_blocks=1 00:28:56.665 --rc geninfo_unexecuted_blocks=1 00:28:56.665 00:28:56.665 ' 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:56.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.665 --rc genhtml_branch_coverage=1 00:28:56.665 --rc genhtml_function_coverage=1 00:28:56.665 --rc genhtml_legend=1 00:28:56.665 --rc geninfo_all_blocks=1 00:28:56.665 --rc geninfo_unexecuted_blocks=1 00:28:56.665 00:28:56.665 ' 00:28:56.665 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:56.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.665 --rc genhtml_branch_coverage=1 00:28:56.665 --rc genhtml_function_coverage=1 00:28:56.665 --rc genhtml_legend=1 00:28:56.665 --rc geninfo_all_blocks=1 00:28:56.665 --rc geninfo_unexecuted_blocks=1 00:28:56.665 00:28:56.665 ' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.666 12:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:04.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:04.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:04.811 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:04.811 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
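The `[: : integer expression expected` message earlier in this trace comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: test(1) was handed an empty string where `-eq` requires an integer operand. A minimal sketch of the defensive pattern that avoids this, assuming the flag variable may be unset or empty (the variable name below is hypothetical, not from the script):

#!/usr/bin/env bash
# Reproduce the failure mode: an empty operand to -eq is not an integer.
maybe_empty_flag=""

# Substituting a default with ${var:-0} guarantees test(1) sees an integer.
if [ "${maybe_empty_flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled or unset (no integer-expression error)"
fi

The harness runs on regardless, since the failing test simply returns non-zero, but the stray diagnostic in the log is avoidable with the default expansion.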
00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.811 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:29:04.812 00:29:04.812 --- 10.0.0.2 ping statistics --- 00:29:04.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.812 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:29:04.812 00:29:04.812 --- 10.0.0.1 ping statistics --- 00:29:04.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.812 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.812 ************************************ 00:29:04.812 START TEST nvmf_target_disconnect_tc1 00:29:04.812 ************************************ 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.812 12:13:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.812 [2024-10-21 12:13:40.680964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.812 [2024-10-21 12:13:40.681033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa06150 with addr=10.0.0.2, port=4420 00:29:04.812 [2024-10-21 12:13:40.681065] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:04.812 [2024-10-21 12:13:40.681080] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.812 [2024-10-21 12:13:40.681088] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:04.812 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:04.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:04.812 Initializing NVMe Controllers 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:04.812 00:29:04.812 real 0m0.132s 00:29:04.812 user 0m0.050s 00:29:04.812 sys 0m0.082s 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.812 ************************************ 00:29:04.812 END TEST nvmf_target_disconnect_tc1 00:29:04.812 ************************************ 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
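The nvmftestinit trace above builds the test topology: one E810 port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables ACCEPT rule for port 4420 and a ping in each direction as a sanity check. A sketch of the same pattern using a veth pair instead of physical NICs, so it runs on any Linux host with iproute2 and root (the demo_* names are mine, not from the trace):

#!/usr/bin/env bash
set -euo pipefail

ip netns add demo_ns_spdk                      # target-side namespace
ip link add demo_1 type veth peer name demo_0  # stand-ins for cvl_0_1 / cvl_0_0
ip link set demo_0 netns demo_ns_spdk          # target port lives in the namespace

ip addr add 10.0.0.1/24 dev demo_1             # initiator side
ip netns exec demo_ns_spdk ip addr add 10.0.0.2/24 dev demo_0
ip link set demo_1 up
ip netns exec demo_ns_spdk ip link set demo_0 up
ip netns exec demo_ns_spdk ip link set lo up

ping -c 1 10.0.0.2                             # same reachability checks as the trace
ip netns exec demo_ns_spdk ping -c 1 10.0.0.1

ip netns del demo_ns_spdk                      # destroying the namespace also removes the veth pair

Putting the target behind a namespace is what lets the harness later address it with `ip netns exec cvl_0_0_ns_spdk ...` while the initiator-side tools run unqualified.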
00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.812 ************************************ 00:29:04.812 START TEST nvmf_target_disconnect_tc2 00:29:04.812 ************************************ 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1160893 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1160893 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1160893 ']' 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.812 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.813 12:13:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.813 [2024-10-21 12:13:40.844424] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:29:04.813 [2024-10-21 12:13:40.844485] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.813 [2024-10-21 12:13:40.934383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.813 [2024-10-21 12:13:40.986479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.813 [2024-10-21 12:13:40.986531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
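nvmf_tgt is launched above with `-m 0xF0`, an SPDK core mask selecting which CPU cores run reactors: 0xF0 is binary 1111 0000, i.e. cores 4 through 7, which is exactly the set of `Reactor started on core N` lines that follow. A small sketch decoding such a mask with pure bash arithmetic:

#!/usr/bin/env bash
# Expand an SPDK-style hex core mask into the core numbers it enables.
mask=0xF0
for core in {0..31}; do
    if (( (mask >> core) & 1 )); then
        echo "reactor core: $core"   # prints 4, 5, 6, 7 for 0xF0
    fi
done

The reconnect initiator on the other side is run with `-c 0xF` (cores 0-3), so target and initiator never contend for the same cores.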
00:29:04.813 [2024-10-21 12:13:40.986540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.813 [2024-10-21 12:13:40.986547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.813 [2024-10-21 12:13:40.986554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.813 [2024-10-21 12:13:40.988583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:04.813 [2024-10-21 12:13:40.988743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:04.813 [2024-10-21 12:13:40.988894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.813 [2024-10-21 12:13:40.988895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.386 Malloc0 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.386 [2024-10-21 12:13:41.762002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.386 12:13:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.386 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.387 [2024-10-21 12:13:41.802430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1160974 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:05.387 12:13:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.309 12:13:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1160893 00:29:07.309 12:13:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error 
(sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Read completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 Write completed with error (sct=0, sc=8) 00:29:07.309 starting I/O failed 00:29:07.309 [2024-10-21 12:13:43.841570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.309 [2024-10-21 12:13:43.842007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.842045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.842383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.842406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.842758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.842772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 
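The burst of `Read/Write completed with error (sct=0, sc=8)` lines above reports NVMe completion status for the in-flight I/Os (the run uses queue depth `-q 32`) when the connection to the killed target collapses. Assuming the pair follows the NVMe base specification's status fields (sct = status code type, sc = status code), sct=0 selects the generic command status set, where 0x08 is "Command Aborted due to SQ Deletion", consistent with the queues being torn down and with the `CQ transport error -6 (No such device or address)` line that closes the burst. A toy decoder for just the case seen here:

#!/usr/bin/env bash
# Decode the (sct, sc) pair logged by the failed completions above.
# The mapping is my reading of the NVMe generic status table, not harness output.
decode_nvme_status() {
    local sct=$1 sc=$2
    if (( sct == 0 && sc == 8 )); then
        echo "generic status: Command Aborted due to SQ Deletion"
    else
        echo "sct=$sct sc=$sc: consult the NVMe status code tables"
    fi
}
decode_nvme_status 0 8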
00:29:07.309 [2024-10-21 12:13:43.843098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.843118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.843386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.843421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.843784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.843799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.843987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.844008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.844173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.844185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.309 [2024-10-21 12:13:43.844466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.309 [2024-10-21 12:13:43.844479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.309 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.844845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.844857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.845207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.845220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.845549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.845564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.846798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.846833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 
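From here on the records repeat one pattern: `connect() failed, errno = 111` followed by `qpair failed and we were unable to recover it.` Errno 111 on Linux is ECONNREFUSED: the `kill -9 1160893` above removed nvmf_tgt, so nothing listens on 10.0.0.2:4420 and every reconnect attempt is refused, which is precisely the condition target_disconnect_tc2 exercises. A generic probe that reproduces it (a sketch; address and port are taken from the trace):

#!/usr/bin/env bash
host=10.0.0.2 port=4420
# /dev/tcp is a bash pseudo-device; a failed connect() surfaces as a non-zero
# exit, the same ECONNREFUSED the reconnect example keeps logging.
if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "listener present on ${host}:${port}"
else
    echo "connect refused/timed out on ${host}:${port}"
fi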
00:29:07.310 [2024-10-21 12:13:43.847144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.847158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.847554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.847568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.847915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.847928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.848091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.848104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.848393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.848405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.848729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.848742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.849085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.849100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.849449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.849464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.849761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.849773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.850079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.850092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 
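Both the failing probe in tc1 and the reconnect run in tc2 receive the target as an SPDK transport-ID string, `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420`: space-separated key:value pairs naming the transport type, address family, address, and service (port). A toy parser, just to make the field layout explicit:

#!/usr/bin/env bash
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
declare -A field
for kv in $trid; do
    field[${kv%%:*}]=${kv#*:}   # split each pair at the first colon
done
echo "connect via ${field[trtype]} to ${field[traddr]}:${field[trsvcid]} (${field[adrfam]})"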
00:29:07.310 [2024-10-21 12:13:43.850310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.850328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.850587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.850602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.850941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.850953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.851311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.851330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.852735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.852773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.853121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.853139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.853464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.853479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.853828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.853843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.854156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.854170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.854520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.854536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 
00:29:07.310 [2024-10-21 12:13:43.854875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.854888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.855234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.310 [2024-10-21 12:13:43.855249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.310 qpair failed and we were unable to recover it. 00:29:07.310 [2024-10-21 12:13:43.855574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.855589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.855945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.855958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.856293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.856305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.856526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.856539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.856901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.856914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.857249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.857264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.857567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.857580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.857895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.857909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 
00:29:07.311 [2024-10-21 12:13:43.858212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.858225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.858555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.858568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.858867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.858879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.859242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.859258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.859575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.859589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.860752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.860789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.861159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.861175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.861502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.861514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.861861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.861874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.862183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.862195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 
00:29:07.311 [2024-10-21 12:13:43.862617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.862632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.862935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.862950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.863312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.863340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.863688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.863703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.864024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.864040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.864355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.864371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.864780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.864796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.865127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.865145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.865351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.865368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.866432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.866468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 
00:29:07.311 [2024-10-21 12:13:43.866863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.866881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.868362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.868404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.868786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.868804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.869955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.869989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.870340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.870358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.870678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.870694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.311 [2024-10-21 12:13:43.871065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.311 [2024-10-21 12:13:43.871081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.311 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.871288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.871306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.871617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.871634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.871941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.871955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 
00:29:07.312 [2024-10-21 12:13:43.872278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.872293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.872525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.872543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.872865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.872881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.874087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.874127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.874483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.874509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.874848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.874867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.875187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.875207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.875629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.875650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.875994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.876015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.876338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.876359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 
00:29:07.312 [2024-10-21 12:13:43.876717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.876739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.877057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.877079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.877242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.877266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.877584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.877610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.877958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.877978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.878298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.878318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.878668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.878688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.879046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.879066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.879402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.879425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.879748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.879768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 
00:29:07.312 [2024-10-21 12:13:43.880111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.880131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.880465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.880487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.880813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.880834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.881169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.881188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.881500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.881521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.881867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.881886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.882225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.882246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.882572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.882593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.882919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.882941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.883299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.883319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 
00:29:07.312 [2024-10-21 12:13:43.883669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.883690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.885340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.885393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.312 qpair failed and we were unable to recover it. 00:29:07.312 [2024-10-21 12:13:43.885792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.312 [2024-10-21 12:13:43.885820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.886171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.886196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.886540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.886567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.886910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.886935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.887182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.887212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.887448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.887475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.887830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.887855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.888210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.888234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 
00:29:07.313 [2024-10-21 12:13:43.888580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.888606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.888942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.888965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.889296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.889318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.889698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.889723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.890076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.890101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.890351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.890377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.891390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.891435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.891815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.891843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.892070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.892099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.892426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.892451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 
00:29:07.313 [2024-10-21 12:13:43.892794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.892818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.894530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.894582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.894947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.894973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.895342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.895374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.896400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.896449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.896845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.896878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.897273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.897304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.897694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.897724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.899207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.899256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.899647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.899681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 
00:29:07.313 [2024-10-21 12:13:43.900048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.900077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.900405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.900435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.313 [2024-10-21 12:13:43.900783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.313 [2024-10-21 12:13:43.900811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.313 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.901176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.901207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.901571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.901600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.901965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.901992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.902336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.902367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.902545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.902577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.902921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.902951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.903308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.903351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 
00:29:07.586 [2024-10-21 12:13:43.903706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.903734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.904111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.904138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.904521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.904553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.904901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.904930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.905177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.905207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.905546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.905575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.905931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.905959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.906332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.906366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.906597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.906632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.906978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.907010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 
00:29:07.586 [2024-10-21 12:13:43.907412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.907446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.907842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.907873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.908224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.908256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.908630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.908662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.909024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.909056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.909354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.909387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.909750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.909780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.910134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.910166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.910501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.910534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.910896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.910927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 
00:29:07.586 [2024-10-21 12:13:43.911288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.911331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.911661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.911692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.912390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-10-21 12:13:43.912433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.586 qpair failed and we were unable to recover it. 00:29:07.586 [2024-10-21 12:13:43.912814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.912860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.913250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.913290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.914247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.914299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.914720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.914754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.915109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.915141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.915505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.915539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.915914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.915946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 
00:29:07.587 [2024-10-21 12:13:43.916311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.916354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.916616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.916650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.917011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.917043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.917344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.917376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.917762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.917792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.918149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.918180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.918432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.918464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.918872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.918904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.919259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.919291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.919690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.919724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 
00:29:07.587 [2024-10-21 12:13:43.920092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.920124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.920503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.920539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.920902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.920934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.921205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.921236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.921628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.921660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.922000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.922031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.922319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.922362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.922773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.922804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.923153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.923184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.923564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.923599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 
00:29:07.587 [2024-10-21 12:13:43.923945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.923976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.924390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.924424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.924801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.924832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.925185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.925216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.925616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.925649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.925987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.926018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.926385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.926418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.926782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.926813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.587 qpair failed and we were unable to recover it. 00:29:07.587 [2024-10-21 12:13:43.926992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.587 [2024-10-21 12:13:43.927022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.927379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.927411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 
00:29:07.588 [2024-10-21 12:13:43.927789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.927821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.928199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.928230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.928570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.928605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.928944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.928979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.929211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.929241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.929611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.929642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.929905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.929935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.930297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.930338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.930560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.930589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.930933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.930964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 
00:29:07.588 [2024-10-21 12:13:43.931204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.931233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.931648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.931680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.932039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.932069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.932413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.932447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.932822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.932851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.933210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.933241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.933567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.933599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.933954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.933984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.934351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.934384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.934756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.934789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 
00:29:07.588 [2024-10-21 12:13:43.935009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.935039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.935400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.935433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.935818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.935850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.936207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.936237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.936618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.936654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.936998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.937030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.937277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.937306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.937680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.937711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.938125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.938156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.938529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.938560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 
00:29:07.588 [2024-10-21 12:13:43.938925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.938959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.939314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.939357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.939606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.939639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.588 qpair failed and we were unable to recover it. 00:29:07.588 [2024-10-21 12:13:43.940003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.588 [2024-10-21 12:13:43.940033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.940404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.940437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.940784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.940816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.941068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.941098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.941513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.941546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.941920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.941951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.942337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.942369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 
00:29:07.589 [2024-10-21 12:13:43.942771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.942802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.943050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.943083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.943394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.943426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.943818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.943855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.944208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.944241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.944494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.944526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.944877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.944909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.945270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.945300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.945698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.945729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 00:29:07.589 [2024-10-21 12:13:43.946132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.589 [2024-10-21 12:13:43.946162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.589 qpair failed and we were unable to recover it. 
00:29:07.589 [2024-10-21 12:13:43.946513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.946546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.946916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.946946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.947309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.947352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.947773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.947803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.948160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.948191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.948531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.948562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.948925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.948955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.949343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.949376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.949646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.949676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.950042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.950072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.950446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.950478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.950899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.950929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.951286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.951317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.951730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.951763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.952042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.952072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.952454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.952486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.952862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.952894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.953262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.953293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.953713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.589 [2024-10-21 12:13:43.953745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.589 qpair failed and we were unable to recover it.
00:29:07.589 [2024-10-21 12:13:43.954150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.954182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.954477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.954509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.954833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.954863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.955289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.955332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.955735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.955766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.956147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.956178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.956528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.956564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.956816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.956847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.957210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.957241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.957602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.957637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.958028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.958059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.958247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.958280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.958594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.958625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.958998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.959028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.959448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.959480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.959853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.959884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.960318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.960385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.960626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.960659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.960905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.960936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.961266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.961298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.961672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.961704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.962078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.962108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.962343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.962375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.962772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.962803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.963216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.963248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.963634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.963668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.964037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.964067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.964492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.964524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.964897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.964931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.965291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.965332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.965698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.965731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.966114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.966147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.590 [2024-10-21 12:13:43.966504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.590 [2024-10-21 12:13:43.966537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.590 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.966916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.966948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.967211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.967242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.967602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.967634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.967990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.968021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.968381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.968414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.968788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.968818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.969062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.969092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.969357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.969389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.969747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.969783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.970130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.970164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.970511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.970543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.970875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.970907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.971241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.971271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.971641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.971674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.972039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.972069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.972376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.972408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.972787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.972818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.973186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.973217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.973511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.973542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.973939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.973972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.974358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.974392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.974776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.974808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.975066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.975096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.975438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.975471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.975918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.975950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.976304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.976348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.976769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.976800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.977181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.977213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.977636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.977669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.978050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.978081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.591 [2024-10-21 12:13:43.978491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.591 [2024-10-21 12:13:43.978523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.591 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.978947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.978978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.979238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.979268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.979682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.979713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.980076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.980107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.980466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.980500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.980839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.980870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.981084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.981114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.981519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.981551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.981917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.981948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.982305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.982349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.982748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.982779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.983138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.983170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.983547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.983581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.983942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.983972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.984369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.984402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.984801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.984831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.985182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.985215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.985577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.985616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.986015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.986046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.986401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.986435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.986813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.986844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.987083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.987115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.987460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.987493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.987863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.987895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.988186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.988217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.988640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.988672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.989024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.989055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.989246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.989276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.989741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.989774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.990096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.990129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.990451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.990483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.990887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.990919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.991270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.991300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.991686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.991717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.592 [2024-10-21 12:13:43.992062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.592 [2024-10-21 12:13:43.992095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.592 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.992444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.992476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.992816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.992846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.993213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.993246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.993485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.993516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.993866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.993897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.994255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.994286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.994705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.994737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.995087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.995118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.995400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.995432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.995708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.995739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.996080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.996112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.996486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.996518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.996899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.996930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.997170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.997201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.997434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.997465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.997860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.997890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.998304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.998359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.998748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.998779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.999144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.999176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.999566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.999599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:43.999946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:43.999978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.000361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.000393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.000778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.000815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.001169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.001202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.001646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.001679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.002038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.002070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.002318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.002363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.002831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.002862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.003219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.003251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.003657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.003690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.004100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.004130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.004522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.004554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.004906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.004938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.005295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.005337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.005598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.593 [2024-10-21 12:13:44.005628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.593 qpair failed and we were unable to recover it.
00:29:07.593 [2024-10-21 12:13:44.005977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.006007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.006161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.006191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.006651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.006683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.006922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.006952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.007302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.007347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.007751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.007782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.008147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.008177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.008568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.008600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.008947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.008980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.009400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.009431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.009811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.009842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.010088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.010118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.010438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.010470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.010823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.010854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.011213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.011244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.011488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.011520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.011928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.594 [2024-10-21 12:13:44.011959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.594 qpair failed and we were unable to recover it.
00:29:07.594 [2024-10-21 12:13:44.012316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.012362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.012729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.012761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.012948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.012979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.013409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.013441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.013751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.013782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.014136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.014167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.014515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.014548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.014926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.014957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.015307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.015351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.015754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.015784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 
00:29:07.594 [2024-10-21 12:13:44.016035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.016072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.016449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.016481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.016860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.016891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.017255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.017286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.017680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.017712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.018081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.018113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.018484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.018517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.018898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.594 [2024-10-21 12:13:44.018928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.594 qpair failed and we were unable to recover it. 00:29:07.594 [2024-10-21 12:13:44.019308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.019349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.019713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.019744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 
00:29:07.595 [2024-10-21 12:13:44.020102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.020133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.020509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.020540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.020906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.020937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.021305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.021346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.021736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.021768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.021971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.022001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.022382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.022416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.022690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.022721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.023077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.023107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.023481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.023514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 
00:29:07.595 [2024-10-21 12:13:44.023779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.023809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.024064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.024096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.024341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.024373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.024748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.024779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.025128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.025159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.025562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.025594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.025947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.025980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.026344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.026377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.026752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.026782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.027039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.027069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 
00:29:07.595 [2024-10-21 12:13:44.027451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.027484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.027841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.027872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.028225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.028256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.028661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.028693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.029066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.029096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.029383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.029415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.029803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.029833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.030183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.030214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.030493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.030525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 00:29:07.595 [2024-10-21 12:13:44.030896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.030926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.595 qpair failed and we were unable to recover it. 
00:29:07.595 [2024-10-21 12:13:44.031274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.595 [2024-10-21 12:13:44.031311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.031736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.031768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.031915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.031948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.032310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.032354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.032736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.032768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.033017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.033048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.033442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.033474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.033774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.033806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.034161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.034191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.034441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.034473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 
00:29:07.596 [2024-10-21 12:13:44.034830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.034862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.035237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.035266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.035639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.035671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.036042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.036073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.036356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.036388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.036768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.036798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.037152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.037184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.037568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.037602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.037952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.037983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.038238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.038269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 
00:29:07.596 [2024-10-21 12:13:44.038684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.038716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.038948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.038982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.039232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.039263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.039540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.039571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.039846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.039876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.040224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.040255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.040607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.040642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.040889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.040920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.041197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.041226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.041390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.041426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 
00:29:07.596 [2024-10-21 12:13:44.041823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.041855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.042206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.042236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.042585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.042619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.596 [2024-10-21 12:13:44.042989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.596 [2024-10-21 12:13:44.043019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.596 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.043438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.043471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.043850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.043881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.044106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.044136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.044557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.044590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.044956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.044986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.045373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.045407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 
00:29:07.597 [2024-10-21 12:13:44.045745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.045783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.046161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.046192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.046454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.046489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.046836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.046867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.047100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.047129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.047504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.047536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.047895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.047926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.048273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.048304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.048697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.048729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.049087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.049119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 
00:29:07.597 [2024-10-21 12:13:44.049403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.049435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.049797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.049828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.050190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.050222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.050483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.050516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.050888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.050920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.051288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.051332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.051704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.051736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.052112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.052143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.052368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.052400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.052652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.052683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 
00:29:07.597 [2024-10-21 12:13:44.053071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.053102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.053400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.053435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.053698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.053731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.054025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.054057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.054456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.054488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.054749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.054781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.055156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.055188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.055541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.055574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.055912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.597 [2024-10-21 12:13:44.055944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.597 qpair failed and we were unable to recover it. 00:29:07.597 [2024-10-21 12:13:44.056313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.056356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 
00:29:07.598 [2024-10-21 12:13:44.056730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.056760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.057132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.057163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.057534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.057568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.057941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.057972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.058358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.058392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.058740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.058771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.059124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.059155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.059414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.059444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.059798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.059829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.060162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.060193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 
00:29:07.598 [2024-10-21 12:13:44.060546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.060586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.060927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.060958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.061311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.061358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.061614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.061645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.061981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.062011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.062350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.062384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.062743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.062774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.063041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.063070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.063223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.063257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.063611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.063643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 
00:29:07.598 [2024-10-21 12:13:44.064006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.064037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.064367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.064399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.064771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.064802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.065160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.065190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.065588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.065620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.065981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.066013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.066359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.066392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.066764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.066794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.067154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.067185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 00:29:07.598 [2024-10-21 12:13:44.067551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.598 [2024-10-21 12:13:44.067585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.598 qpair failed and we were unable to recover it. 
00:29:07.598 [2024-10-21 12:13:44.067937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.598 [2024-10-21 12:13:44.067968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.598 qpair failed and we were unable to recover it.
[... the same three-line failure repeats 209 more times between 2024-10-21 12:13:44.068210 and 12:13:44.144218: posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:07.605 [2024-10-21 12:13:44.144775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.144806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.145220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.145251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.145625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.145658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.146096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.146127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.146510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.146542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.146976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.147006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.147409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.147441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.147801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.147831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.148167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.148198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.148450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.148480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 
00:29:07.605 [2024-10-21 12:13:44.148913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.148943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.149203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.149232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.149713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.149745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.150102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.150134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.150497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.150529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.150836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.150866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.151116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.151147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.151307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.151350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.151720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.151750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.152100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.152129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 
00:29:07.605 [2024-10-21 12:13:44.152507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.152541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.152896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.152928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.153285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.153316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.153704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.153734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.154098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.154129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.154368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.154400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.154760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.154791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.155124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.155157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.155539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.155571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.155962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.155992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 
00:29:07.605 [2024-10-21 12:13:44.156366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.156398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.156677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.605 [2024-10-21 12:13:44.156708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.605 qpair failed and we were unable to recover it. 00:29:07.605 [2024-10-21 12:13:44.157062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.157093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.157452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.157484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.157852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.157883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.158251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.158281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.158649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.158683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.158917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.158957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.159308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.159357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.159720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.159751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 
00:29:07.606 [2024-10-21 12:13:44.159991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.160021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.160394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.160428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.160807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.160838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.161201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.161232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.161582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.161614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.161982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.162012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.162357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.162393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.162754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.162784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.163145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.163177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.163553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.163585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 
00:29:07.606 [2024-10-21 12:13:44.163824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.163853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.164218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.164249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.164591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.164627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.164974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.165004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.165379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.165412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.165786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.165818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.166188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.166219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.166440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.166471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.166820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.166850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.167208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.167238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 
00:29:07.606 [2024-10-21 12:13:44.169253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.169314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.169620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.169656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.606 [2024-10-21 12:13:44.170039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.606 [2024-10-21 12:13:44.170073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.606 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.170403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.170440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.170811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.170843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.171205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.171239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.171454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.171485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.171814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.171845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.172202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.172234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.172483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.172517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 
00:29:07.892 [2024-10-21 12:13:44.172881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.172914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.173262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.173293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.173673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.173705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.174069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.174100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.174449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.174482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.174833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.174864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.175228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.175257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.175618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.175659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.176006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.176036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.176407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.176440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 
00:29:07.892 [2024-10-21 12:13:44.176809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.176839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.177178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.177208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.177554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.177589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.177937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.177967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.178343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.178377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.178671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.178703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.179050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.179083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.179439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.179472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.179838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.892 [2024-10-21 12:13:44.179869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.892 qpair failed and we were unable to recover it. 00:29:07.892 [2024-10-21 12:13:44.180228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.180259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 
00:29:07.893 [2024-10-21 12:13:44.180629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.180661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.181022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.181054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.181398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.181431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.181820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.181851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.182197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.182229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.182643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.182675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.183030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.183060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.183398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.183431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.183800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.183833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.184192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.184222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 
00:29:07.893 [2024-10-21 12:13:44.184575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.184606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.184971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.185002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.185367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.185400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.185769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.185801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.186174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.186207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.186569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.186601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.186958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.186988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.187381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.187413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.187772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.187805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.188166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.188196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 
00:29:07.893 [2024-10-21 12:13:44.188569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.188605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.188954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.188985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.189361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.189394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.189742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.189771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.190125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.190156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.190537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.190567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.190933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.190964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.191308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.191360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.191745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.191776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.192172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.192203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 
00:29:07.893 [2024-10-21 12:13:44.192603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.192636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.192984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.193017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.193381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.193413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.193792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.893 [2024-10-21 12:13:44.193822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.893 qpair failed and we were unable to recover it. 00:29:07.893 [2024-10-21 12:13:44.195827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.195890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 00:29:07.894 [2024-10-21 12:13:44.196260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.196298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 00:29:07.894 [2024-10-21 12:13:44.196698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.196730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 00:29:07.894 [2024-10-21 12:13:44.197100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.197131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 00:29:07.894 [2024-10-21 12:13:44.197493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.197526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 00:29:07.894 [2024-10-21 12:13:44.197881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.894 [2024-10-21 12:13:44.197910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.894 qpair failed and we were unable to recover it. 
00:29:07.894 [2024-10-21 12:13:44.198261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.894 [2024-10-21 12:13:44.198292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:07.894 qpair failed and we were unable to recover it.
00:29:07.900 [... the same three-line failure sequence repeats for every reconnect attempt from 12:13:44.198261 through 12:13:44.295146, always with tqpair=0x7fdf1c000b90, addr=10.0.0.2, port=4420, errno = 111; repetitions elided ...]
00:29:07.900 [2024-10-21 12:13:44.295550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.295583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.295961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.295991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.296352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.296382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.296757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.296788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.297153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.297184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.297545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.297577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.298399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.298443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.298824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.298863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.299239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.299277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.299669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.299704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 
00:29:07.900 [2024-10-21 12:13:44.300066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.300102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.300471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.300507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.300868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.300899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.301262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.301293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.301570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.301600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.301958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.301988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.302370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.302405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.302770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.302810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.305082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.305145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.305548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.305584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 
00:29:07.900 [2024-10-21 12:13:44.305932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.305964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.900 qpair failed and we were unable to recover it. 00:29:07.900 [2024-10-21 12:13:44.306215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.900 [2024-10-21 12:13:44.306249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.306606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.306639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.306997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.307029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.307377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.307408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.307765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.307796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.308152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.308186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.308548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.308579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.308951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.308982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.309353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.309387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 
00:29:07.901 [2024-10-21 12:13:44.309772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.309802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.310156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.310188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.311949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.312008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.312430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.312467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.312819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.312851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.313205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.313238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.313594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.313627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.313777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.313813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.314197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.314227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.314560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.314595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 
00:29:07.901 [2024-10-21 12:13:44.314955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.314988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.315369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.315404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.317194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.317251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.317657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.317692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.318080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.318112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.318484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.318519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.318867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.318898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.319290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.319331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.319630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.319664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.319895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.319926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 
00:29:07.901 [2024-10-21 12:13:44.320296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.320338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.320727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.320757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.321136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.321168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.321527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.321563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.321914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.321947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.322308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.901 [2024-10-21 12:13:44.322352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.901 qpair failed and we were unable to recover it. 00:29:07.901 [2024-10-21 12:13:44.322619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.322649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.323010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.323047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.323296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.323344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.323730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.323760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 
00:29:07.902 [2024-10-21 12:13:44.324118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.324149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.324378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.324410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.324796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.324827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.325153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.325185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.325573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.325605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.325984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.326015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.326262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.326291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.326679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.326712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.326945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.326976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.327341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.327374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 
00:29:07.902 [2024-10-21 12:13:44.327754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.327786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.328024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.328054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.328280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.328311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.328565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.328601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.329009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.329042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.329439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.329473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.329841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.329872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.330237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.330270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.330732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.330767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.331114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.331145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 
00:29:07.902 [2024-10-21 12:13:44.331499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.331534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.331901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.331931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.332307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.332348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.332640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.332671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.333039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.333070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.333399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.333433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.902 qpair failed and we were unable to recover it. 00:29:07.902 [2024-10-21 12:13:44.333806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.902 [2024-10-21 12:13:44.333837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.334192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.334224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.334503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.334534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.334876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.334909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 
00:29:07.903 [2024-10-21 12:13:44.335155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.335186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.335558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.335590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.335945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.335976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.336370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.336405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.336652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.336683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.337049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.337081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.337449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.337482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.337844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.337883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.338223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.338254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.338688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.338719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 
00:29:07.903 [2024-10-21 12:13:44.339107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.339139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.339534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.339567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.339925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.339955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.340316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.340375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.340764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.340794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.341193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.341223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.341650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.341683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.342050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.342081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.342334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.342368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.342743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.342775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 
00:29:07.903 [2024-10-21 12:13:44.343135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.343166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.343511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.343543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.343810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.343841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.344192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.344222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.344619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.344653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.344897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.344927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.345346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.345380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.345736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.345766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.346139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.346170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.346642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.346676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 
00:29:07.903 [2024-10-21 12:13:44.347005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.903 [2024-10-21 12:13:44.347037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.903 qpair failed and we were unable to recover it. 00:29:07.903 [2024-10-21 12:13:44.347406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.347438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.347781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.347813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.348147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.348178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.348537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.348572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.348922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.348952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.349199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.349230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.349582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.349614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.349998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.350029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.350409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.350441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 
00:29:07.904 [2024-10-21 12:13:44.350795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.350826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.351068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.351102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.351397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.351430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.351829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.351860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.352223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.352256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.352642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.352676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.353030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.353063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.353316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.353369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.353757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.353789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 00:29:07.904 [2024-10-21 12:13:44.354150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.904 [2024-10-21 12:13:44.354179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.904 qpair failed and we were unable to recover it. 
00:29:07.910 [2024-10-21 12:13:44.435798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.435828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.436175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.436205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.436649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.436682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.437035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.437066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.437393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.437425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.437779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.437811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.438157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.438187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.438579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.438611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.438955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.438986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.439348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.439381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 
00:29:07.910 [2024-10-21 12:13:44.439735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.439766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.440134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.440166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.440422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.440453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.440844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.440874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.441211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.441243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.441561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.441592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.441758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.441793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.442037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.442069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.442407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.442440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 00:29:07.910 [2024-10-21 12:13:44.442820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.910 [2024-10-21 12:13:44.442850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.910 qpair failed and we were unable to recover it. 
00:29:07.910 [2024-10-21 12:13:44.443220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.443250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.443571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.443603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.443955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.443985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.444360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.444392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.444767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.444805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.445192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.445222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.445581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.445613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.445979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.446011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.446270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.446305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.446586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.446617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 
00:29:07.911 [2024-10-21 12:13:44.446965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.446997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.447360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.447393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.447759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.447791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.448131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.448162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.448429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.448460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.448822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.448852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.449201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.449233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.449605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.449638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.449873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.449907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.450151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.450181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 
00:29:07.911 [2024-10-21 12:13:44.450568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.450598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.450952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.450983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.451365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.451399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.451790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.451820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.452063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.452093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.452440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.452471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.452842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.452873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.453241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.453272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.453651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.453684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.454034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.454065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 
00:29:07.911 [2024-10-21 12:13:44.454448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.454481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.454845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.454878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.455233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.455264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.455576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.455609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.455844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.911 [2024-10-21 12:13:44.455876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.911 qpair failed and we were unable to recover it. 00:29:07.911 [2024-10-21 12:13:44.456220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.456251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.456595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.456629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.456971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.457002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.457381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.457412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.457671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.457702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 
00:29:07.912 [2024-10-21 12:13:44.458049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.458081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.458357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.458390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.458714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.458745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.459132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.459162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.459513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.459551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.459920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.459953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.460285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.460315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.460732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.460764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.461166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.461197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.461432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.461466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 
00:29:07.912 [2024-10-21 12:13:44.461855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.461887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.462242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.462273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.462617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.462650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.463001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.463032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.463356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.463387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:07.912 [2024-10-21 12:13:44.463673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.912 [2024-10-21 12:13:44.463704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:07.912 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.464048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.464081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.464482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.464513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.464778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.464808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.465164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.465194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 
00:29:08.251 [2024-10-21 12:13:44.465552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.465587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.465927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.465957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.466343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.466375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.466739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.466772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.467060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.467092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.467366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.467397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.467795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.467826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.251 [2024-10-21 12:13:44.468188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.251 [2024-10-21 12:13:44.468219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.251 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.468566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.468597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.468956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.468985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 
00:29:08.252 [2024-10-21 12:13:44.469315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.469381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.469784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.469816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.470176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.470208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.470607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.470640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.470998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.471030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.471445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.471478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.471846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.471879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.472237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.472270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.472753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.472786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.473184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.473217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 
00:29:08.252 [2024-10-21 12:13:44.473695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.473727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.474087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.474120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.474386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.474419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.474818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.474850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.475209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.475246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.475671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.475704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.476060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.476091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.476432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.476465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.476844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.476874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.477252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.477285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 
00:29:08.252 [2024-10-21 12:13:44.477685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.477718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.477926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.477955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.478316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.478363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.478604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.478638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.479008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.479038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.479405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.479439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.479819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.479851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.480199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.480228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.480583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.480616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.481014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.481045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 
00:29:08.252 [2024-10-21 12:13:44.481406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.481438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.481831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.481864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.482238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.482269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.482674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.482706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.483065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.483097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.483448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.483480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.483853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.483884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.484138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.484171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.484416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.484450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 00:29:08.252 [2024-10-21 12:13:44.484816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.252 [2024-10-21 12:13:44.484847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.252 qpair failed and we were unable to recover it. 
00:29:08.252 [2024-10-21 12:13:44.485099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.253 [2024-10-21 12:13:44.485131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.253 qpair failed and we were unable to recover it.
[... the identical three-message sequence above repeats ~208 more times (about 210 occurrences in total, elapsed 00:29:08.252-00:29:08.258); only the timestamps advance ...]
00:29:08.258 [2024-10-21 12:13:44.565277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.258 [2024-10-21 12:13:44.565308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.258 qpair failed and we were unable to recover it.
00:29:08.258 [2024-10-21 12:13:44.565559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.258 [2024-10-21 12:13:44.565590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.258 qpair failed and we were unable to recover it. 00:29:08.258 [2024-10-21 12:13:44.565823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.258 [2024-10-21 12:13:44.565856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.258 qpair failed and we were unable to recover it. 00:29:08.258 [2024-10-21 12:13:44.566227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.258 [2024-10-21 12:13:44.566256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.258 qpair failed and we were unable to recover it. 00:29:08.258 [2024-10-21 12:13:44.566624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.258 [2024-10-21 12:13:44.566655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.258 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.567004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.567043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.567399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.567431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.567811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.567842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.568198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.568230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.568570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.568601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.568959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.568991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 
00:29:08.259 [2024-10-21 12:13:44.569211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.569247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.569641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.569673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.570031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.570062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.570396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.570429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.570796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.570826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.571193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.571223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.571441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.571476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.571852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.571882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.572240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.572271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.572621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.572652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 
00:29:08.259 [2024-10-21 12:13:44.572999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.573029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.573398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.573431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.573817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.573848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.574204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.574236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.574486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.574519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.574884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.574915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.575268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.575300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.575680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.575711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.575949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.575982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.576360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.576394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 
00:29:08.259 [2024-10-21 12:13:44.576741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.576773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.577025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.577062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.577398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.577431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.577780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.577811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.578159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.578190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.578551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.578583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.578954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.578984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.579345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.579376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.259 qpair failed and we were unable to recover it. 00:29:08.259 [2024-10-21 12:13:44.579734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.259 [2024-10-21 12:13:44.579765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.580007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.580036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 
00:29:08.260 [2024-10-21 12:13:44.580394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.580425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.580828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.580858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.581206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.581236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.581573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.581608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.581971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.582001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.582370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.582402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.582702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.582732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.583086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.583118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.583482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.583515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.583864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.583896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 
00:29:08.260 [2024-10-21 12:13:44.584135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.584170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.584507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.584540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.584908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.584938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.585296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.585342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.585735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.585767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.586119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.586149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.586492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.586523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.586879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.586910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.587272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.587303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.587675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.587706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 
00:29:08.260 [2024-10-21 12:13:44.588074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.588106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.588469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.588502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.588740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.588772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.589118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.589148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.589500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.589533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.589896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.589927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.590295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.590337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.590667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.590699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.591050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.591080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.591454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.591486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 
00:29:08.260 [2024-10-21 12:13:44.591848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.591878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.592226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.592263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.592645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.592677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.260 qpair failed and we were unable to recover it. 00:29:08.260 [2024-10-21 12:13:44.593039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.260 [2024-10-21 12:13:44.593070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.593424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.593457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.593824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.593855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.594271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.594301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.594664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.594699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.595058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.595089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.595454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.595489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 
00:29:08.261 [2024-10-21 12:13:44.595838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.595870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.596227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.596259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.596621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.596655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.597017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.597049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.597414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.597449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.597818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.597848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.598225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.598256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.598599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.598632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.598986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.599016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.599381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.599413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 
00:29:08.261 [2024-10-21 12:13:44.599795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.599826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.600112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.600142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.600512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.600544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.600905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.600938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.601301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.601341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.601699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.601729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.602100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.602132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.602493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.602526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.602872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.602905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.603265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.603295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 
00:29:08.261 [2024-10-21 12:13:44.603669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.603701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.604068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.604098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.604466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.261 [2024-10-21 12:13:44.604499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.261 qpair failed and we were unable to recover it. 00:29:08.261 [2024-10-21 12:13:44.604886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.604918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.605272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.605304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.605679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.605711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.606069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.606102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.606463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.606493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.606726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.606759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.607116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.607149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 
00:29:08.262 [2024-10-21 12:13:44.607511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.607544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.607914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.607952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.608317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.608365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.608730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.608759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.609119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.609149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.609553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.609586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.609946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.609976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.610335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.610369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.610733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.610765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.611132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.611162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 
00:29:08.262 [2024-10-21 12:13:44.611502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.611534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.611889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.611920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.612281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.612312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.612573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.612604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.612975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.613005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.613359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.613393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.613643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.613674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.614041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.614070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.614434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.614465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 00:29:08.262 [2024-10-21 12:13:44.614831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.262 [2024-10-21 12:13:44.614861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.262 qpair failed and we were unable to recover it. 
00:29:08.262 [2024-10-21 12:13:44.615228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.262 [2024-10-21 12:13:44.615260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.262 qpair failed and we were unable to recover it.
00:29:08.262 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 12:13:44.615 and 12:13:44.694 ...]
00:29:08.268 [2024-10-21 12:13:44.694342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.268 [2024-10-21 12:13:44.694375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.268 qpair failed and we were unable to recover it.
00:29:08.268 [2024-10-21 12:13:44.694721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-10-21 12:13:44.694751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-10-21 12:13:44.695114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-10-21 12:13:44.695144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-10-21 12:13:44.695518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-10-21 12:13:44.695551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-10-21 12:13:44.695905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.695935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.696289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.696329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.696678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.696710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.697070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.697100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.697500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.697533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.697884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.697916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.698278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.698308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 
00:29:08.269 [2024-10-21 12:13:44.698670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.698702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.699057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.699094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.699444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.699476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.699844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.699875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.700275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.700306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.700672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.700704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.700943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.700973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.701222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.701253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.701628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.701660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.702011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.702042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 
00:29:08.269 [2024-10-21 12:13:44.702415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.702448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.702841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.702872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.703226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.703256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.703623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.703656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.704007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.704039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.704396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.704429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.704775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.704807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.705048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.705083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.705510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.705542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.705903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.705934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 
00:29:08.269 [2024-10-21 12:13:44.706277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.706309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.706750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.706780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.707128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.707160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.707527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.707560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.707919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.707950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.708310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.708353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.708711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.708740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.709102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.269 [2024-10-21 12:13:44.709132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.269 qpair failed and we were unable to recover it. 00:29:08.269 [2024-10-21 12:13:44.709560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.709593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.709972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.710002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-10-21 12:13:44.710361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.710393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.710791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.710821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.711178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.711208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.711604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.711637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.712008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.712038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.712444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.712477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.712835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.712866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.713224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.713256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.713625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.713658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.714002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.714035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-10-21 12:13:44.714389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.714420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.714791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.714833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.715180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.715212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.715646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.715679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.716034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.716064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.716465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.716496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.716847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.716877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.717244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.717276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.717644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.717677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.718031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.718064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-10-21 12:13:44.718413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.718446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.718809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.718840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.719197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.719228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.719580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.719613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.719966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.719997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.720348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.720383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.720785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.720815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.721169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.721201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.721464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.721496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-10-21 12:13:44.721874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.721905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-10-21 12:13:44.722263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-10-21 12:13:44.722294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.722690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.722721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.723079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.723110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.723474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.723507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.723867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.723897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.724233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.724265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.724620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.724651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.725013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.725043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.725277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.725310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.725706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.725738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-10-21 12:13:44.726131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.726162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.726525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.726557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.726915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.726948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.727351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.727384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.727738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.727769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.728127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.728159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.728535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.728567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.728919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.728949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.729288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.729331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.729705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.729737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-10-21 12:13:44.730099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.730130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.730482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.730521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.730900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.730930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.731282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.731313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.731674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.731705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.732071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.732103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.732438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.732470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.732828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.732859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.733223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.733254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.733606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.733637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-10-21 12:13:44.733989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.734018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.734380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.734412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.734808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.734839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-10-21 12:13:44.735066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-10-21 12:13:44.735099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.735457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.735489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.735738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.735768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.736172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.736203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.736476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.736508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.736856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.736886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.737244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.737274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 
00:29:08.272 [2024-10-21 12:13:44.737636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.737669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.738061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.738091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.738452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.738485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.738867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.738897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.739259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.739291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.739644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.739675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.740008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.740041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.740396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.740430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.740658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.740691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.741068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.741099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 
00:29:08.272 [2024-10-21 12:13:44.741454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.741487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.741857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.741889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.742226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.742257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.742622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.742653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.743007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.743038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.743395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.743426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.743782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.743813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.744170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.744201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.744575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.744607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.744969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.745000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 
00:29:08.272 [2024-10-21 12:13:44.745360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.745392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.745778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.745815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.746180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.746212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.746574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.746607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.746962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.746994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.747355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.747385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.747745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.747777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.748116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.748146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.272 qpair failed and we were unable to recover it. 00:29:08.272 [2024-10-21 12:13:44.748416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.272 [2024-10-21 12:13:44.748447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.748831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.748861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 
00:29:08.273 [2024-10-21 12:13:44.749212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.749243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.749608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.749641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.749999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.750031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.750395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.750426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.750792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.750822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.751173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.751205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.751452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.751485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.751852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.751883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.752245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.752276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.752641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.752673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 
00:29:08.273 [2024-10-21 12:13:44.753047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.753079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.753432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.753464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.753835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.753865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.754227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.754259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.754614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.754646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.754870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.754899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.755273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.755304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.755550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.755585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.755938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.755968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.756220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.756250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 
00:29:08.273 [2024-10-21 12:13:44.756600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.756633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.756988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.757018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.757380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.757414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.757791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.757823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.758171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.758201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.758541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.758572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.758931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.758962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.759331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.759364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.759710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.759740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.760120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.760150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 
00:29:08.273 [2024-10-21 12:13:44.760508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.760540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.760899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.760936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.761272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.761303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-10-21 12:13:44.761707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-10-21 12:13:44.761739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.762097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.762129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.762487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.762520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.762882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.762913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.763267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.763298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.763578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.763608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.763982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.764012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-10-21 12:13:44.764355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.764388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.764745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.764774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.765207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.765236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.765596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.765629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.765987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.766019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.766355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.766388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.766745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.766775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.767133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.767164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.767505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.767536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.767889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.767920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-10-21 12:13:44.768274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.768306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.768681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.768712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.769066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.769097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.769464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.769495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.769749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.769780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.770127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.770157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.770499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.770533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.770885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.770916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.771279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.771315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.771678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.771710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-10-21 12:13:44.772074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.772107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.772462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.772494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.772859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.772889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.773250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.773282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.773520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.773553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.773968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.773998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.774345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.774379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.774711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.774742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-10-21 12:13:44.775095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-10-21 12:13:44.775126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.775484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.775517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-10-21 12:13:44.775879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.775909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.776268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.776300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.776681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.776712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.777072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.777104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.777463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.777495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.777855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.777886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.778222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.778254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.778516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.778549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.778892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.778924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.779286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.779318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-10-21 12:13:44.779684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.779715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.780086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.780117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.780525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.780557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.780904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.780936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.781306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.781354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.781692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.781725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.782075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.782104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.782464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.782498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.782920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.782952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.783308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.783349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-10-21 12:13:44.783708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.783739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.784101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.784133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.784482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.784515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.784873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.784902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.785274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.785305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.785669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.785700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.786056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.786089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.786448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.786480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.786848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.786885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-10-21 12:13:44.787235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-10-21 12:13:44.787267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 
00:29:08.276 [2024-10-21 12:13:44.787604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.787635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.787878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.787907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.788251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.788281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.788628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.788659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.789025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.789056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.789404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.789436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.789800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.789830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.790175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.790207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.790551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.790582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.790934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.790966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 
00:29:08.276 [2024-10-21 12:13:44.791334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.791368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.791719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.791750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.792108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.792138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.792509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.792543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.792891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.792921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.793282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.793314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.793682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.793715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.794061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.794091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.794440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.794474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.794820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.794853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 
00:29:08.276 [2024-10-21 12:13:44.795200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.795232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.795610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.795642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.796003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.796034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.796414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.796446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.796809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.796840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.797179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.797211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.797578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.797610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.798011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.798043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.798391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.798424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.798782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.798813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 
00:29:08.276 [2024-10-21 12:13:44.799181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.799212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.799575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.799608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.799968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.799998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.276 [2024-10-21 12:13:44.800363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-10-21 12:13:44.800396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.276 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.800764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.800794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.801156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.801187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.801545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.801578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.801935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.801965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.802311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.802362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.802734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.802766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-10-21 12:13:44.803123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.803153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.803516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.803551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.803790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.803823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.804107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.804136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.804512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.804545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.804910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.804941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.805340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.805373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.805627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.805658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.806033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.806063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.806423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.806455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-10-21 12:13:44.806695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.806726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.807080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.807111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.807489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.807523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.807766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.807800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.808152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.808182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.808552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.808583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.808940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.808970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.809333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.809367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.809727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.809758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.810115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.810145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-10-21 12:13:44.810501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.810532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.810769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.810803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.811160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.811191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.811558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.811591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.811943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.811973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.812341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.812375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.812696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.812727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.813085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.813117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.813482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-10-21 12:13:44.813514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-10-21 12:13:44.813863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-10-21 12:13:44.813893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.627 [2024-10-21 12:13:44.886039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.886068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.886427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.886459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.886820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.886852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.887223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.887255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.887583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.887616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.887949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.887981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.888217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.888250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.888627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.888662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.888872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.888905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.889146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.889182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 
00:29:08.627 [2024-10-21 12:13:44.889595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.889629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.889986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.890019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.890497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.890530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.890872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.890902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.891257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.891289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.891742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.891775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.892025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.892055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.892433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.892468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.627 [2024-10-21 12:13:44.892738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.627 [2024-10-21 12:13:44.892778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.627 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.893163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.893195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 
00:29:08.628 [2024-10-21 12:13:44.893565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.893599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.893844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.893875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.894230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.894261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.894650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.894684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.895028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.895061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.895238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.895269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.895465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.895499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.895855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.895888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.896280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.896311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.896558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.896590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 
00:29:08.628 [2024-10-21 12:13:44.896940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.896972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.897348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.897382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.897758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.897790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.898166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.898199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.898565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.898598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.898840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.898869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.899233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.899265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.899634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.899668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.900028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.900059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.900432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.900466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 
00:29:08.628 [2024-10-21 12:13:44.900889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.900919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.901355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.901389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.901748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.901778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.902143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.902173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.902599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.902631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.902983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.903016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.903384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.903416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.903779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.903810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.904163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.904194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.904531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.904563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 
00:29:08.628 [2024-10-21 12:13:44.904803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.904834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.905182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.905214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.905455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.905488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.905722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.628 [2024-10-21 12:13:44.905752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.628 qpair failed and we were unable to recover it. 00:29:08.628 [2024-10-21 12:13:44.906116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.906147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.906523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.906557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.907000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.907032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.907380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.907412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.907789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.907826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.908186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.908218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 
00:29:08.629 [2024-10-21 12:13:44.908590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.908622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.908972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.909002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.909352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.909384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.909784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.909813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.910059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.910089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.910443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.910475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.910839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.910870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.911235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.911267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.911623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.911655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.912085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.912117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 
00:29:08.629 [2024-10-21 12:13:44.912471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.912503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.912855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.912887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.913111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.913141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.913504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.913537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.913893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.913924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.914354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.914386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.914742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.914775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.915211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.915243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.915572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.915605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.915951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.915982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 
00:29:08.629 [2024-10-21 12:13:44.916342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.916375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.916749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.916779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.917133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.917164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.917560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.917594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.917940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.917973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.918341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.918375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.918612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.918642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.919073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.919103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.919468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.629 [2024-10-21 12:13:44.919501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.629 qpair failed and we were unable to recover it. 00:29:08.629 [2024-10-21 12:13:44.919859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.919890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 
00:29:08.630 [2024-10-21 12:13:44.920246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.920278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.920674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.920705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.921071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.921103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.921460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.921493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.921890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.921921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.922279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.922310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.922696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.922727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.923083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.923115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.923470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.923508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.923757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.923787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 
00:29:08.630 [2024-10-21 12:13:44.924188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.924218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.924590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.924623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.924982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.925015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.925378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.925411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.925771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.925802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.926238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.926269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.926628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.926661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.927033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.927064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.927354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.927386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.927756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.927786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 
00:29:08.630 [2024-10-21 12:13:44.928145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.928179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.928545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.928578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.928932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.928963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.929352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.929386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.929641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.929674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.930037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.930067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.930420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.930454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.930815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.930845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.931206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.931237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.931608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.931641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 
00:29:08.630 [2024-10-21 12:13:44.931885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.931915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.932254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.932284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.932646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.932679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.933078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.630 [2024-10-21 12:13:44.933109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.630 qpair failed and we were unable to recover it. 00:29:08.630 [2024-10-21 12:13:44.933464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.933497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.933849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.933881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.934240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.934271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.934635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.934666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.935008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.935040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.935396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.935429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 
00:29:08.631 [2024-10-21 12:13:44.935790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.935822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.936183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.936213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.936585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.936617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.936978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.937009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.937366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.937398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.937800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.937830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.938179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.938211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.938580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.938613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.938970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.939007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 00:29:08.631 [2024-10-21 12:13:44.939372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.631 [2024-10-21 12:13:44.939407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.631 qpair failed and we were unable to recover it. 
00:29:08.631 [2024-10-21 12:13:44.939772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.631 [2024-10-21 12:13:44.939803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.631 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats roughly 200 more times between 12:13:44.940157 and 12:13:45.020191, identical apart from the timestamps: every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports the same sock connection error for tqpair=0x7fdf1c000b90 (addr=10.0.0.2, port=4420), and each time the qpair fails and cannot be recovered ...]
00:29:08.637 [2024-10-21 12:13:45.020455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.020486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.020851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.020881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.021239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.021271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.021664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.021697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.022055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.022086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.022430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.022461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.022814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.022846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.023200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.023231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.023588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.023622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 00:29:08.637 [2024-10-21 12:13:45.023864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.637 [2024-10-21 12:13:45.023894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.637 qpair failed and we were unable to recover it. 
00:29:08.637 [2024-10-21 12:13:45.024257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.024288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.024644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.024678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.025038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.025070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.025329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.025362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.025737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.025768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.026128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.026165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.026497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.026532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.026891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.026922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.027271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.027301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.027701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.027733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 
00:29:08.638 [2024-10-21 12:13:45.028086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.028117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.028484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.028517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.028873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.028904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.029268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.029298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.029667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.029699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.030068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.030101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.030451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.030485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.030834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.030866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.031221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.031252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.031601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.031636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 
00:29:08.638 [2024-10-21 12:13:45.032005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.032037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.032383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.032415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.032770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.032802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.033161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.033193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.033550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.033581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.033931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.033961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.034333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.034365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.034726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.034756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.035117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.035147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.035505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.035537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 
00:29:08.638 [2024-10-21 12:13:45.035897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.035928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.036286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.036317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.036711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.036743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.037110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.037140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.037504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.037536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.638 qpair failed and we were unable to recover it. 00:29:08.638 [2024-10-21 12:13:45.037895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.638 [2024-10-21 12:13:45.037925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.038279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.038308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.038675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.038708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.039057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.039086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.039443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.039475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 
00:29:08.639 [2024-10-21 12:13:45.039831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.039861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.040113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.040148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.040509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.040541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.040876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.040907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.041260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.041290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.041654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.041693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.042092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.042123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.042456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.042489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.042845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.042876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.043238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.043269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 
00:29:08.639 [2024-10-21 12:13:45.043675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.043708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.044050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.044083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.044316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.044364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.044650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.044679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.045030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.045060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.045448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.045480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.045840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.045870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.046225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.046254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.046619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.046652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.047014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.047045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 
00:29:08.639 [2024-10-21 12:13:45.047398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.047431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.047795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.047824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.048172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.048202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.048569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.048600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.048948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.048978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.049316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.049359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.049715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.049744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.050107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.050137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.050485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.050518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.050876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.050907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 
00:29:08.639 [2024-10-21 12:13:45.051266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.639 [2024-10-21 12:13:45.051296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.639 qpair failed and we were unable to recover it. 00:29:08.639 [2024-10-21 12:13:45.051698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.051728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.052088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.052118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.052469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.052500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.052877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.052906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.053140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.053176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.053549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.053582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.053934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.053964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.054319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.054362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.054624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.054656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 
00:29:08.640 [2024-10-21 12:13:45.054909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.054937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.055295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.055345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.055711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.055741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.056090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.056120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.056465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.056497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.056854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.056891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.057254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.057284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.057673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.057707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.058060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.058090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.058448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.058479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 
00:29:08.640 [2024-10-21 12:13:45.058717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.058748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.059115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.059144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.059512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.059546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.059895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.059925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.060283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.060313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.060719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.060751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.061100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.061130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.061495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.061527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.061755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.061787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.062143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.062173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 
00:29:08.640 [2024-10-21 12:13:45.062420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.062450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.062823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.062852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.063214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.063244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.063537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.063569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.063964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.063996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.064239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.640 [2024-10-21 12:13:45.064269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.640 qpair failed and we were unable to recover it. 00:29:08.640 [2024-10-21 12:13:45.064520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.064551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.064814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.064843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.065198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.065229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.065586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.065618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 
00:29:08.641 [2024-10-21 12:13:45.065975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.066005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.066350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.066382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.066766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.066798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.067056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.067087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.067480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.067513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.067867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.067898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.068295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.068336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.068699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.068729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.069067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.069097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.069517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.069549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 
00:29:08.641 [2024-10-21 12:13:45.069897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.069926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.070307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.070348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.070694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.070725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.071083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.071115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.071474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.071507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.071857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.071892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.072251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.072281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.072647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.072681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.073033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.073063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.073417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.073449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 
00:29:08.641 [2024-10-21 12:13:45.073809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.073839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.074195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.074225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.074586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.074618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.074977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.075006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.075250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.075285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.075677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.075709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.076060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.076090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.076451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.076482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.076841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.076870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.077233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.077264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 
00:29:08.641 [2024-10-21 12:13:45.077507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.077538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.641 qpair failed and we were unable to recover it. 00:29:08.641 [2024-10-21 12:13:45.077896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.641 [2024-10-21 12:13:45.077926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.078355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.078390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.078754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.078784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.079132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.079162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.079533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.079565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.079900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.079930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.080276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.080307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.080649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.080681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.081104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.081134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 
00:29:08.642 [2024-10-21 12:13:45.081483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.081514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.081903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.081933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.082290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.082332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.082713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.082744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.083104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.083134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.083485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.083519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.083874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.083904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.084265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.084294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.084688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.084720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.085081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.085113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 
00:29:08.642 [2024-10-21 12:13:45.085468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.085501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.085854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.085886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.086232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.086263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.086674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.086709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.087061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.087091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.087450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.087490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.087843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.087874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.088234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.088266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.088627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.088659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.089029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.089061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 
00:29:08.642 [2024-10-21 12:13:45.089415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.089447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.089809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.642 [2024-10-21 12:13:45.089838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.642 qpair failed and we were unable to recover it. 00:29:08.642 [2024-10-21 12:13:45.090208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.090238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.090577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.090607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.090969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.090999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.091356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.091391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.091655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.091685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.092047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.092078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.092437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.092469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.092833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.092862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 
00:29:08.643 [2024-10-21 12:13:45.093259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.093290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.093660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.093691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.094028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.094058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.094436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.094470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.094842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.094872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.095128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.095157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.095499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.095532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.095884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.095916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.096274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.096303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.096561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.096593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 
00:29:08.643 [2024-10-21 12:13:45.096935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.096967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.097225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.097255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.097645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.097678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.098044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.098074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.098436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.098468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.098830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.098861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.099227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.099256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.099614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.099646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.099908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.099938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.100162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.100193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 
00:29:08.643 [2024-10-21 12:13:45.100415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.100449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.100701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.100731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.101123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.101153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.101526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.101557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.101961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.101991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.102347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.102384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.102790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.102820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.103183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.643 [2024-10-21 12:13:45.103214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.643 qpair failed and we were unable to recover it. 00:29:08.643 [2024-10-21 12:13:45.103575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.103608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.103972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.104001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 
00:29:08.644 [2024-10-21 12:13:45.104376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.104407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.104677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.104706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.105061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.105091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.105451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.105481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.105848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.105878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.106116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.106146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.106495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.106526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.106947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.106977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.107337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.107368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.107785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.107815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 
00:29:08.644 [2024-10-21 12:13:45.108163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.108193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.108550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.108584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.108947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.108978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.109315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.109360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.109733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.109763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.110136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.110168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.110545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.110577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.110935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.110964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.111332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.111364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.111775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.111806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 
00:29:08.644 [2024-10-21 12:13:45.112169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.112201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.112553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.112587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.112971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.113002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.113360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.113392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.113774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.113803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.114169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.114201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.114444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.114477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.114830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.114859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.115227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.115258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.115506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.115536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 
00:29:08.644 [2024-10-21 12:13:45.115750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.115782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.116131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.116162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.116501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.116537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.116907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.644 [2024-10-21 12:13:45.116937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.644 qpair failed and we were unable to recover it. 00:29:08.644 [2024-10-21 12:13:45.117305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.117346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.117737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.117783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.118132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.118162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.118403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.118435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.118830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.118862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.119193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.119224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 
00:29:08.645 [2024-10-21 12:13:45.119559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.119592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.119953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.119983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.120358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.120392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.120776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.120807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.121171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.121201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.121448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.121480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.121856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.121887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.122273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.122305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.122677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.122708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.123073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.123103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 
00:29:08.645 [2024-10-21 12:13:45.123468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.123499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.123870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.123899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.124233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.124264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.124626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.124659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.125012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.125043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.125396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.125429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.125835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.125867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.126234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.126267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.126626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.126659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.127011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.127043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 
00:29:08.645 [2024-10-21 12:13:45.127291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.127338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.127588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.127619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.127981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.128014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.128376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.128409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.128779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.128810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.129164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.129196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.129546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.129580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.129936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.129968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.130339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.130372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 00:29:08.645 [2024-10-21 12:13:45.130607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.645 [2024-10-21 12:13:45.130638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.645 qpair failed and we were unable to recover it. 
00:29:08.645 [2024-10-21 12:13:45.131002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.131033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.131394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.131426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.131791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.131823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.132180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.132213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.132586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.132618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.132973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.133011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.133380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.133414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.133759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.133789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.134161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.134194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.134446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.134479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-10-21 12:13:45.134843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.134874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.135128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.135160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.135409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.135446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.135877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.135908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.136269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.136300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.136707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.136739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.137098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.137128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.137503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.137537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.137927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.137959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.138228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.138262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-10-21 12:13:45.138618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.138650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.138896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.138928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.139286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.139317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.139725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.139756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.140117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.140150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.140374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.140406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.140672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.140703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.141086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.141116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.141365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.141399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 00:29:08.646 [2024-10-21 12:13:45.141757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.646 [2024-10-21 12:13:45.141788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-10-21 12:13:45.142009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.646 [2024-10-21 12:13:45.142038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-10-21 12:13:45.142441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.646 [2024-10-21 12:13:45.142473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-10-21 12:13:45.142839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.646 [2024-10-21 12:13:45.142869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-10-21 12:13:45.143233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.646 [2024-10-21 12:13:45.143263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.143637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.143670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.144030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.144061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.144440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.144472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.144850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.144884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.145223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.145253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.145648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.145682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.146037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.146069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.146432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.146463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.146832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.146861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.147266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.147297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.147676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.147708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.148069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.148106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.148461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.148493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.148856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.148885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.149248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.149279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.149679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.149711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.150071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.150100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.150474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.150506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.150862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.150891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.151236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.151267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.151626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.151657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.152017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.152049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.152416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.152449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.152804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.152834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.153077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.153108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.153446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.153479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.153879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.153910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.154260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.154290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.154672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.154704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.155042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.155071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.155442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.155473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.155837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.155868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.647 [2024-10-21 12:13:45.156221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.647 [2024-10-21 12:13:45.156252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.647 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.156496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.156527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.156864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.156894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.157260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.157291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.157666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.157698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.158059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.158088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.158452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.158484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.158833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.158863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.159193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.159223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.159564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.159596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.159951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.159981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.160349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.160382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.160739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.160767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.161123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.161153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.161578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.161611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.161957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.161989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.162342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.162374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.162773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.162804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.163151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.163180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.163507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.163540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.163928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.163959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.164313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.164354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.164709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.164738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.165104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.165134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.165479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.165512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.165870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.165900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.166263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.166293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.166650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.166681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.167034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.167063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.167422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.167454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.167805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.167833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.168203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.168232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.168481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.168512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.168872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.168902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.169256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.169287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.169672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.648 [2024-10-21 12:13:45.169704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.648 qpair failed and we were unable to recover it.
00:29:08.648 [2024-10-21 12:13:45.170054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.170085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.170462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.170496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.170848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.170878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.171240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.171271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.171656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.171687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.172038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.172069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.172423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.172455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.172823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.172854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.173208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.173239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.173600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.173632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.173879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.173920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.174299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.174341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.174691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.174721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.175076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.175105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.175485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.175517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.175870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.175901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.176259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.176288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.176695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.176727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.177079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.177110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.177473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.177506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.177836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.177867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.178203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.178234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.178582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.178615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.178979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.179010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.179379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.179413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.179793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.179824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.180183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.180214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.180569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.180601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.180953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.180985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.181345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.181378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.181725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.181756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.182116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.182147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.182500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.182533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.182700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.182734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.183102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.183133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.649 qpair failed and we were unable to recover it.
00:29:08.649 [2024-10-21 12:13:45.183492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.649 [2024-10-21 12:13:45.183524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 [2024-10-21 12:13:45.183877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.183909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.184276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.184307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.184686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.184717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.185075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.185104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.185464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.185496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.185736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.185769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.186118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.186149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.186507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.186540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.186895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.186925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.187356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.187388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.187735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.187767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.188124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.188155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.188536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.188567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.188997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.189027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.189377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.189417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.189776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.189807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.190163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.190192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.190570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.190601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.190964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.190994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.191344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.191376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.191743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.191773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.192129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.192159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.192554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.192588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.192939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.192970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.193303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.193366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.193695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.193725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.194155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.194186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.194551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.194585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.194942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.194973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.195343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.195375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.195729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.195759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.196119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.196149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.196508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.196539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.196893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.196923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.650 qpair failed and we were unable to recover it.
00:29:08.650 [2024-10-21 12:13:45.197285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.650 [2024-10-21 12:13:45.197315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 [2024-10-21 12:13:45.197728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.197758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.198110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.198141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.198534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.198566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.198911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.198941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.199305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.199345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.199704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.199735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.200094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.200124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.200479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.200509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.200861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.200892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.651 [2024-10-21 12:13:45.201254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.651 [2024-10-21 12:13:45.201284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.651 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.201660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.201697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.202090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.202121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.202476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.202508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.202872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.202901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.203262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.203293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.203677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.203709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.204064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.204095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.204450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.204482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.204847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.204880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.205233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.205271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.205674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.205705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.206064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.206096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.206453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.206485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.206842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.206871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.207228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.207258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.207604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.207638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.207989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.930 [2024-10-21 12:13:45.208019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.930 qpair failed and we were unable to recover it.
00:29:08.930 [2024-10-21 12:13:45.208377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.208408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.208831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.208861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.209216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.209246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.209606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.209637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.210009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.210040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.210389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.210422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.210783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.210814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.930 qpair failed and we were unable to recover it. 00:29:08.930 [2024-10-21 12:13:45.211266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.930 [2024-10-21 12:13:45.211296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.211670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.211701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.212056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.212087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 
00:29:08.931 [2024-10-21 12:13:45.212440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.212473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.212839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.212870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.213224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.213254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.213620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.213652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.214009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.214038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.214402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.214434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.214787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.214817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.215179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.215210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.215576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.215607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.215988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.216019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 
00:29:08.931 [2024-10-21 12:13:45.216370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.216401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.216758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.216790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.217154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.217184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.217538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.217568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.217810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.217844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.218196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.218226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.218473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.218504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.218865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.218895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.219245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.219276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.219657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.219689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 
00:29:08.931 [2024-10-21 12:13:45.220044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.220074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.220437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.220469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.220831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.220868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.221231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.221260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.221687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.221719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.222069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.222098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.222456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.222488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.222866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.222895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.223251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.223282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.223645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.223676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 
00:29:08.931 [2024-10-21 12:13:45.224027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.224057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.224426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.224460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.931 [2024-10-21 12:13:45.224791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.931 [2024-10-21 12:13:45.224821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.931 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.225177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.225206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.225572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.225604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.225961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.225991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.226353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.226385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.226748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.226779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.227120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.227150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.227504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.227535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 
00:29:08.932 [2024-10-21 12:13:45.227898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.227927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.228293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.228332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.228674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.228704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.229062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.229092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.229464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.229496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.229920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.229950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.230304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.230348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.230640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.230671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.231022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.231052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.231408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.231441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 
00:29:08.932 [2024-10-21 12:13:45.231694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.231726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.232111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.232142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.232500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.232532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.232894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.232925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.233283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.233312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.233692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.233723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.234081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.234111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.234470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.234502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.234874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.234906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.235253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.235284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 
00:29:08.932 [2024-10-21 12:13:45.235651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.235682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.236052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.236082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.236345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.236384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.236762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.236793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.237224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.237254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.237611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.237644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.237997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.238028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.932 [2024-10-21 12:13:45.238385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.932 [2024-10-21 12:13:45.238416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.932 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.238780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.238811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.239174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.239205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 
00:29:08.933 [2024-10-21 12:13:45.239569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.239600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.240018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.240050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.240395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.240428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.240784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.240813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.241237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.241267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.241645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.241678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.242031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.242063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.242397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.242429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.242789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.242818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.243178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.243207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 
00:29:08.933 [2024-10-21 12:13:45.243581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.243614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.243969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.243999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.244358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.244390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.244783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.244813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.245173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.245202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.245610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.245643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.245989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.246019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.246374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.246406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.246766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.246796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.247156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.247189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 
00:29:08.933 [2024-10-21 12:13:45.247568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.247600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.247952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.247983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.248344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.248377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.248731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.248761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.249124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.249154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.249550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.249581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.249942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.249973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.250350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.250382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.250738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.250769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.251129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.251157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 
00:29:08.933 [2024-10-21 12:13:45.251525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.251557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.251918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.251947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.933 qpair failed and we were unable to recover it. 00:29:08.933 [2024-10-21 12:13:45.252293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.933 [2024-10-21 12:13:45.252347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.252725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.252757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.253111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.253142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.253518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.253549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.253915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.253945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.254300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.254340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.254698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.254728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.254970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.255004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 
00:29:08.934 [2024-10-21 12:13:45.255352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.255383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.255753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.255784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.256140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.256170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.256529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.256559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.256914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.256944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.257317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.257360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.257717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.257748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.257991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.258024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.258376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.258410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.258762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.258790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 
00:29:08.934 [2024-10-21 12:13:45.259146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.259176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.259553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.259584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.259949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.259979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.260337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.260369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.260610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.260643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.261002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.261031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.261384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.261415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.261790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.261820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.262179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.262208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.262581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.262613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 
00:29:08.934 [2024-10-21 12:13:45.262966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.262995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.263359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.263390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.263749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.263778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.264138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.264167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.264547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.264580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.264948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.264979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.265340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.934 [2024-10-21 12:13:45.265371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.934 qpair failed and we were unable to recover it. 00:29:08.934 [2024-10-21 12:13:45.265670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.935 [2024-10-21 12:13:45.265699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.935 qpair failed and we were unable to recover it. 00:29:08.935 [2024-10-21 12:13:45.266049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.935 [2024-10-21 12:13:45.266079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.935 qpair failed and we were unable to recover it. 00:29:08.935 [2024-10-21 12:13:45.266437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.935 [2024-10-21 12:13:45.266470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.935 qpair failed and we were unable to recover it. 
00:29:08.935 [2024-10-21 12:13:45.266808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.935 [2024-10-21 12:13:45.266837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:08.935 qpair failed and we were unable to recover it.
00:29:08.941 [the same three-line connect()/qpair failure repeats ~200 more times, timestamps 12:13:45.267189 through 12:13:45.346748, all with identical detail: tqpair=0x7fdf1c000b90, addr=10.0.0.2, port=4420, errno = 111]
00:29:08.941 [2024-10-21 12:13:45.346995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.347026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.347376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.347408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.347783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.347814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.348159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.348189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.348623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.348654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.348899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.348929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.349276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.349308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.349650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.349682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.350047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.350078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.350440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.350472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-10-21 12:13:45.350775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.350804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.351166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.351197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.351317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.351359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.351635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.351665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.352053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.352083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.352352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.352384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.352728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.352759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.353122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-10-21 12:13:45.353153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-10-21 12:13:45.353508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.353540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.354009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.354041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 
00:29:08.942 [2024-10-21 12:13:45.354392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.354424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.354781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.354812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.355144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.355174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.355550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.355581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.355829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.355860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.356209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.356241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.356482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.356517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.356873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.356904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.357341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.357373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.357790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.357821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 
00:29:08.942 [2024-10-21 12:13:45.358180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.358212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.358349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.358382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.358739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.358771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.359130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.359163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.359503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.359535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.359789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.359819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.360163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.360194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.360449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.360482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.360723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.360754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.361135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.361166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 
00:29:08.942 [2024-10-21 12:13:45.361393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.361426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.361777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.361808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.362169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.362200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.362572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.362603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.362842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.362871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.363227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.363258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.363478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.363510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.363862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.363894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.364253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.364282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.364674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.364706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 
00:29:08.942 [2024-10-21 12:13:45.365068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.365097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.365455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.365487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.365846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-10-21 12:13:45.365879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-10-21 12:13:45.366334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.366367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.366727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.366757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.367111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.367144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.367395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.367428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.367817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.367850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.368214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.368245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.368627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.368664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 
00:29:08.943 [2024-10-21 12:13:45.369014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.369045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.369404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.369437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.369797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.369827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.370189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.370219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.370561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.370591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.370944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.370975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.371215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.371245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.371388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.371418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.371811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.371841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.372189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.372220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 
00:29:08.943 [2024-10-21 12:13:45.372466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.372499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.372866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.372898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.373090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.373120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.373485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.373518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.373882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.373915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.374137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.374167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.374564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.374598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.374961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.374992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.375368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.375400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.375750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.375781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 
00:29:08.943 [2024-10-21 12:13:45.376142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.376172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.376382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.376413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.376789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.376821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.377186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.377217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.377575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.377607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.377973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.378005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.378377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.378409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.378784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-10-21 12:13:45.378815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-10-21 12:13:45.379165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.379196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.379560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.379593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-10-21 12:13:45.379939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.379971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.380224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.380254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.380606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.380639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.381002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.381033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.381480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.381515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.381882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.381912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.382268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.382299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.382584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.382615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.383023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.383055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.383415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.383454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-10-21 12:13:45.383690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.383720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.384089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.384119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.384497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.384527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.384890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.384922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.385273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.385304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.385560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.385591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.385938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.385969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.386393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.386424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.386771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.386803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.387049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.387081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-10-21 12:13:45.387449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.387482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.387854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.387884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.388244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.388275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.388673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.388705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.389080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.389112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.389353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.389386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.389745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.389776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.390169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.390199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.390581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.390614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.390981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.391012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-10-21 12:13:45.391370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.391403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.391827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.391858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.392218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.392249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-10-21 12:13:45.392622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-10-21 12:13:45.392655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.392902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.392935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.393289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.393331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.393694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.393725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.393970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.394002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.394371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.394403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-10-21 12:13:45.394768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.394799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-10-21 12:13:45.395068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-10-21 12:13:45.395098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every subsequent connection retry between 12:13:45.395 and 12:13:45.474 (console timestamps 00:29:08.945-00:29:08.951); only the per-attempt timestamps differ. Each retry reports: connect() failed, errno = 111 / sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:29:08.951 [2024-10-21 12:13:45.474865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.474895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.475263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.475293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.475672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.475705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.476134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.476165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.476512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.476545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.476895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.476924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.477278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.477309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.477667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.477698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.478051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.478081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.478456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.478488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 
00:29:08.951 [2024-10-21 12:13:45.478873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.478903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.479262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.479292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.479659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.479692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.480053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.480085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.480452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.480484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.480836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.480866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.481216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.481247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.481612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.481645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.481983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.482014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.951 [2024-10-21 12:13:45.482396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.482428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 
00:29:08.951 [2024-10-21 12:13:45.482775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.951 [2024-10-21 12:13:45.482805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.951 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.483159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.483188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.483604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.483636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.483986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.484017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.484378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.484410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.484825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.484855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.485207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.485238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.485601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.485632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.485963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.485993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.486346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.486377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 
00:29:08.952 [2024-10-21 12:13:45.486740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.486770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.487018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.487049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.487408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.487440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.487795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.487825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.488185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.488215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.488468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.488504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.488855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.488887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.489291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.489338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.489711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.489741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.490084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.490122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 
00:29:08.952 [2024-10-21 12:13:45.490476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.490509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.490878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.490908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.491265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.491295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.491662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.491693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.492042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.492071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.492433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.492465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.492900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.492930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.493283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.493313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.493705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.493737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.494099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.494129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 
00:29:08.952 [2024-10-21 12:13:45.494486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.494518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.494877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.494907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.495269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.495300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.495684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.495716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.496070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.496100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.496460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.952 [2024-10-21 12:13:45.496492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.952 qpair failed and we were unable to recover it. 00:29:08.952 [2024-10-21 12:13:45.496861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.496892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.497346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.497380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.497727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.497756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.498124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.498154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 
00:29:08.953 [2024-10-21 12:13:45.498507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.498539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.498887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.498917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.499285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.499314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.499737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.499768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.500129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.500159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.500508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.500540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.500898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.500929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.501290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.501331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.501697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.501727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.502086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.502117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 
00:29:08.953 [2024-10-21 12:13:45.502489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.502520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.502877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.502908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.503255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.503286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.503692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.503724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.503959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.503994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.504219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.504250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.504623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.504656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.505012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.505042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.505405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.505437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.505796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.505832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 
00:29:08.953 [2024-10-21 12:13:45.506176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.506206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.506458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.506492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.506850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.506879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:08.953 [2024-10-21 12:13:45.507239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.953 [2024-10-21 12:13:45.507270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:08.953 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.507645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.507679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.508038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.508072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.508427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.508460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.508593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.508627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.509021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.509052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.509413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.509445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-10-21 12:13:45.509787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.509819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.510037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.510070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.510422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.510455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.510779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.510811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.511047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.511080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.511444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.511478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.511817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.511848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.512204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.512234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.512589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.512621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-10-21 12:13:45.512981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.513012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-10-21 12:13:45.513365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-10-21 12:13:45.513397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.513754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.513785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.514136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.514167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.514536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.514568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.514909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.514941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.515300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.515342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.515712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.515742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.516095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.516126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.516473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.516506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.516858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.516889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-10-21 12:13:45.517225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.517255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.517607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.517639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.518011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.518042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.518403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.518435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.518790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.518821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.519177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.519207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.519577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.519608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.519968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.519999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.520234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.520266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.520622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.520661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-10-21 12:13:45.521022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.521053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.521400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.521433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.521782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.521812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.522149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.522180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.522552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.522584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.522946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.522977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.523340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.523372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.523734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.523765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.524118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.524149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.524504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.524536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-10-21 12:13:45.524931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.524962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.525333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.525367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.525711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.525741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.526101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.526132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.526501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.526534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-10-21 12:13:45.526888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-10-21 12:13:45.526918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.233 [2024-10-21 12:13:45.527318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-10-21 12:13:45.527361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-10-21 12:13:45.527733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-10-21 12:13:45.527765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-10-21 12:13:45.528112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-10-21 12:13:45.528143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-10-21 12:13:45.528498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-10-21 12:13:45.528530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
[the identical three-line sequence -- posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats for every retry from 2024-10-21 12:13:45.528775 through 12:13:45.600758]
00:29:09.238 [2024-10-21 12:13:45.601110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.601141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.601415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.601447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.601803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.601834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.602203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.602235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.602652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.602683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.603045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.603076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.603449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.603482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.603840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.603870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.604236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.604267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.604755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.604786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 
00:29:09.238 [2024-10-21 12:13:45.605205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.605236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.605597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.605629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.605910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.605942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.606342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.606374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.606734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.606765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.607137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.607170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.607557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.607597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.607948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.607979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.608354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.608388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.608744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.608775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 
00:29:09.238 [2024-10-21 12:13:45.609127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.609157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.609543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.609575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.609821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.609852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.610267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.610298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.610681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.610713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.611063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.611094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.611453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.611485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.611855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.611886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.612152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.612184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.612457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.612490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 
00:29:09.238 [2024-10-21 12:13:45.612744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.612776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.612989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.613021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.613375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.613408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.613812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.613842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.614127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.614158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.614411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.614442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.614875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.614907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.615140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.615174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.238 [2024-10-21 12:13:45.615533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-10-21 12:13:45.615565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.238 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.615937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.615968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 
00:29:09.239 [2024-10-21 12:13:45.616343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.616377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.616738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.616768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.617120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.617153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.617505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.617538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.617879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.617910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.618268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.618299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.618685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.618718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.618964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.618995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.619348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.619381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.619770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.619801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 
00:29:09.239 [2024-10-21 12:13:45.620170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.620202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.620580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.620612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.620849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.620884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.621286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.621318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.621737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.621769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.622141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.622174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.622508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.622547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.622896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.622927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.623283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.623315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.623605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.623636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 
00:29:09.239 [2024-10-21 12:13:45.624035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.624065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.624442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.624477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.624868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.624898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.625258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.625289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.625675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.625705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.626065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.626096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.626495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.626528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.626880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.626910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.627261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.627291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.627551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.627585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 
00:29:09.239 [2024-10-21 12:13:45.627969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.628000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.628361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.628393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.628752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.628783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.629137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.629167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.629529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.629560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.629929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.629959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.630315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.630359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.630724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.630754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.239 [2024-10-21 12:13:45.631109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-10-21 12:13:45.631139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.239 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.631504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.631537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 
00:29:09.240 [2024-10-21 12:13:45.631896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.631927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.632227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.632258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.632619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.632651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.633014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.633046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.633395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.633427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.633822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.633852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.634085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.634118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.634482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.634514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.634870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.634900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.635255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.635285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 
00:29:09.240 [2024-10-21 12:13:45.635749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.635782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.636002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.636035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.636383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.636416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.636768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.636799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.637060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.637093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.637489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.637521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.637894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.637930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.638279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.638310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.638716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.638747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.639112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.639144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 
00:29:09.240 [2024-10-21 12:13:45.639501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.639532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.639893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.639924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.640276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.640307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.640676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.640707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.641062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.641091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.641428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.641461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.641799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.641830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.642086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.642119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.642468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.642500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.642858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.642888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 
00:29:09.240 [2024-10-21 12:13:45.643136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.643169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.643543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.643576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.643814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.643844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.644221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-10-21 12:13:45.644251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.240 qpair failed and we were unable to recover it. 00:29:09.240 [2024-10-21 12:13:45.644605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.644638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.645003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.645034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.645390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.645422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.645822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.645853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.646204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.646236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.646597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.646629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 
00:29:09.241 [2024-10-21 12:13:45.646989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.647021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.647402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.647435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.647794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.647826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.648179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.648210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.648567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.648599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.648972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.649003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.649362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.649395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.649753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.649784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.650131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.650162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.650523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.650556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 
00:29:09.241 [2024-10-21 12:13:45.650906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.650936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.651187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.651218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.651578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.651611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.651967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.651998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.652361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.652394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.652798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.652829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.653183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.653220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.653599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.653632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.653981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.654011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 00:29:09.241 [2024-10-21 12:13:45.654374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-10-21 12:13:45.654407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.241 qpair failed and we were unable to recover it. 
00:29:09.241 [2024-10-21 12:13:45.654776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.241 [2024-10-21 12:13:45.654806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.241 qpair failed and we were unable to recover it.
00:29:09.241 [... identical connect() failures (errno = 111) against 10.0.0.2, port 4420 on tqpair=0x7fdf1c000b90 repeat continuously from 12:13:45.654 through 12:13:45.736; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:09.247 [2024-10-21 12:13:45.736827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.736858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.737209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.737241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.737599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.737632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.738028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.738058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.738406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.738438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.738800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.738832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.739189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.739219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.739584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.739617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.739975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.740005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.740348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.740379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 
00:29:09.247 [2024-10-21 12:13:45.740622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.740652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.741013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.741044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.741402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.741433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.741803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.741834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.742262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.742293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.742724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.742755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.743087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.743118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.743509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.743542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.743766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.743797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.744163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.744193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 
00:29:09.247 [2024-10-21 12:13:45.744556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.744588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.744947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.744978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.745341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.745373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.745751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.745782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.746151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.746183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.746603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.746636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.746988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.747019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.747368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.747400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.747778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.747808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.748166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.748197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 
00:29:09.247 [2024-10-21 12:13:45.748539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.748572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.748907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.748938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.749307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.749349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.749694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.749726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.750095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.750126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.247 qpair failed and we were unable to recover it. 00:29:09.247 [2024-10-21 12:13:45.750486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.247 [2024-10-21 12:13:45.750518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.750874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.750905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.751274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.751305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.751703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.751735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.752083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.752113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 
00:29:09.248 [2024-10-21 12:13:45.752470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.752503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.752874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.752904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.753175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.753207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.753560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.753593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.753948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.753979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.754350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.754383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.754674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.754704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.755063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.755094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.755453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.755486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.755842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.755874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 
00:29:09.248 [2024-10-21 12:13:45.756223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.756254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.756619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.756651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.757006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.757037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.757395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.757435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.757769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.757801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.758154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.758185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.758547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.758579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.758934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.758964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.759355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.759389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.759736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.759766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 
00:29:09.248 [2024-10-21 12:13:45.760122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.760152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.760390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.760426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.760783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.760813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.761172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.761201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.761644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.761676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.762029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.762060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.762419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.762453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.762840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.762870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.763232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.763263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.763628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.763660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 
00:29:09.248 [2024-10-21 12:13:45.764018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.764048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.764408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.764441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.764811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.764842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.765202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.765233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.765595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.765627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.248 [2024-10-21 12:13:45.766002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.248 [2024-10-21 12:13:45.766034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.248 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.766399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.766431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.766807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.766837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.767201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.767232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.767594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.767626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 
00:29:09.249 [2024-10-21 12:13:45.767986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.768017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.768379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.768412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.768772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.768803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.769155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.769187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.769421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.769457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.769825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.769856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.770102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.770136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.770495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.770527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.770895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.770925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.771281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.771313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 
00:29:09.249 [2024-10-21 12:13:45.771683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.771715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.772084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.772116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.772477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.772509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.772865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.772902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.773264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.773296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.773659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.773690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.774048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.774079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.774454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.774485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.774856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.774886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.775249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.775280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 
00:29:09.249 [2024-10-21 12:13:45.775635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.775669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.776027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.776058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.776417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.776449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.776803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.776834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.777199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.777229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.777603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.777635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.777997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.778029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.778387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.778420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.778784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.778815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.779177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.779207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 
00:29:09.249 [2024-10-21 12:13:45.779574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.249 [2024-10-21 12:13:45.779606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.249 qpair failed and we were unable to recover it. 00:29:09.249 [2024-10-21 12:13:45.779841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.779875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.780112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.780149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.780496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.780530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.780881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.780912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.781165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.781195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.781565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.781597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.781955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.781988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.782341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.782374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.782729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.782759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 
00:29:09.250 [2024-10-21 12:13:45.783111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.783143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.783510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.783541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.783901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.783932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.784288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.784342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.784587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.784623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.784975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.785006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.785357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.785391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.785803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.785833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.786168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.786199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 00:29:09.250 [2024-10-21 12:13:45.786568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.250 [2024-10-21 12:13:45.786600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.250 qpair failed and we were unable to recover it. 
00:29:09.250 [2024-10-21 12:13:45.786947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.250 [2024-10-21 12:13:45.786978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.250 qpair failed and we were unable to recover it.
00:29:09.250 [... the three-line connect()/qpair-failure pattern above repeats with fresh timestamps from 12:13:45.787 through 12:13:45.828; every connection attempt to 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered ...]
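Note: errno = 111 is ECONNREFUSED, i.e. the host's connect() reaches 10.0.0.2 but nothing is bound to port 4420 while the target is down, so the kernel refuses the TCP handshake outright. A minimal sketch of probing for the same condition from the shell; the address and port come from the log above, everything else is illustrative:

    # Probe the NVMe-oF TCP listener via bash's /dev/tcp pseudo-device.
    # While nvmf_tgt is down, the connect() behind the redirection fails
    # with ECONNREFUSED (errno 111) and bash reports "Connection refused".
    if (exec 3<>/dev/tcp/10.0.0.2/4420); then
        echo "listener on 10.0.0.2:4420 is up"
    else
        echo "connect failed -- expected while the target is down"
    fi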
00:29:09.530 [... connect()/qpair-failure triplets continue, timestamps 12:13:45.829 through 12:13:45.831 ...]
00:29:09.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1160893 Killed "${NVMF_APP[@]}" "$@"
00:29:09.530 [... connect()/qpair-failure triplets continue, timestamps 12:13:45.831 through 12:13:45.832 ...]
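The "Killed" line above is bash's job-status notification: target_disconnect.sh has sent SIGKILL to the running nvmf_tgt (the "${NVMF_APP[@]}" command), which is exactly what provokes the ECONNREFUSED storm on the host side. A hedged stand-alone reproduction of that shell behavior, with sleep as a placeholder for the real app:

    #!/bin/bash
    # Start a long-running placeholder job, SIGKILL it, then reap it;
    # bash prints a status line of the same form as in the log:
    #   script.sh: line N: <pid> Killed <command>
    sleep 300 &
    pid=$!
    kill -9 "$pid"
    wait "$pid"              # exit status 137 = 128 + SIGKILL(9)
    echo "wait status: $?"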
00:29:09.530 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:09.530 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:09.530 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:09.530 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:09.530 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1161737
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1161737
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1161737 ']'
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:09.531 12:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.531 [... connect()/qpair-failure triplets to 10.0.0.2:4420 continue interleaved with the shell trace above, timestamps 12:13:45.832 through 12:13:45.844, all failing with errno = 111 ...]
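The trace shows tc2 restarting the target: nvmfappstart launches a fresh nvmf_tgt (pid 1161737) inside the cvl_0_0_ns_spdk namespace, and waitforlisten polls until the /var/tmp/spdk.sock RPC socket appears before the test continues. A minimal sketch of that start-and-wait pattern, using the paths and max_retries=100 from the trace; the 0.1 s poll interval is an assumption and the real helper's loop body may differ:

    #!/bin/bash
    # Launch the target in its network namespace (paths taken from the trace)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll until the RPC UNIX socket exists; max_retries=100 as in the log
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_addr ]] && break    # socket appears once the app listens
        sleep 0.1                      # poll interval is an assumption
    done
    [[ -S $rpc_addr ]] || { echo "nvmf_tgt (pid $nvmfpid) never came up" >&2; exit 1; }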
00:29:09.531 [... connect()/qpair-failure triplets continue with fresh timestamps from 12:13:45.844 through 12:13:45.863 while the new target comes up; every connection attempt to 10.0.0.2:4420 still fails with errno = 111 and the qpair cannot be recovered ...]
00:29:09.532 [2024-10-21 12:13:45.863747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.863779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.864129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.864161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.864423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.864457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.864845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.864878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.865243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.865273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.865577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.865612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.865978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.866009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.866376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.866408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.866795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.866838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.532 [2024-10-21 12:13:45.867086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.867117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 
00:29:09.532 [2024-10-21 12:13:45.867530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.532 [2024-10-21 12:13:45.867563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.532 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.867939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.867970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.868352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.868384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.868741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.868772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.869144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.869176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.869571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.869602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.869980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.870011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.870388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.870420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.870796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.870826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.871141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.871173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 
00:29:09.533 [2024-10-21 12:13:45.871555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.871588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.871974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.872006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.872254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.872285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.872534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.872566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.872964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.872998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.873400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.873435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.873872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.873904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.874160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.874190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.874555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.874589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.874958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.874990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 
00:29:09.533 [2024-10-21 12:13:45.875308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.875353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.875731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.875762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.876145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.876176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.876569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.876600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.876880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.876911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.877296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.877344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.877753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.877784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.878140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.878172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.878464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.878496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.878875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.878906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 
00:29:09.533 [2024-10-21 12:13:45.879297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.879339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.879699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.879730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.879956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.879988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.880371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.533 [2024-10-21 12:13:45.880403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.533 qpair failed and we were unable to recover it. 00:29:09.533 [2024-10-21 12:13:45.880808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.880841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.881109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.881140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.881491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.881523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.881898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.881930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.882188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.882225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.882590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.882625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 
00:29:09.534 [2024-10-21 12:13:45.882977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.883011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.883400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.883434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.883812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.883842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.884103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.884134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.884490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.884522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.884891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.884922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.885297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.885340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.885678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.885710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.886086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.886118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.886482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.886514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 
00:29:09.534 [2024-10-21 12:13:45.886872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.886903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.887241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.887272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.887715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.887748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.888109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.888140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.888491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.888524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.888904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.888935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.889295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.889337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.889582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.889613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.889981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.890011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.890347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.890381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 
00:29:09.534 [2024-10-21 12:13:45.890639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.890670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.891016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.891046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.891301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.891344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.891691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.891722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.892154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.892186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.892539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.892571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.892949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.892979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.893379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.893411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.893786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.893817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.894157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.894188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 
00:29:09.534 [2024-10-21 12:13:45.894565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.894598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.894847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.894877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.895228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.895259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.895623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.534 [2024-10-21 12:13:45.895656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.534 qpair failed and we were unable to recover it. 00:29:09.534 [2024-10-21 12:13:45.895904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.895934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 00:29:09.535 [2024-10-21 12:13:45.896298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.896342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 00:29:09.535 [2024-10-21 12:13:45.896722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.896753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 00:29:09.535 [2024-10-21 12:13:45.897138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.897168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 00:29:09.535 [2024-10-21 12:13:45.897524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.897562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 00:29:09.535 [2024-10-21 12:13:45.897939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.535 [2024-10-21 12:13:45.897969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.535 qpair failed and we were unable to recover it. 
00:29:09.535 [2024-10-21 12:13:45.898254] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:29:09.535 [2024-10-21 12:13:45.898332] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:09.535 [... the same connect()/qpair-failure sequence resumes and repeats, timestamps 12:13:45.898343 through 12:13:45.927637 ...]
00:29:09.537 [2024-10-21 12:13:45.928015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.928045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.928399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.928431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.928808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.928839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.929249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.929281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.929667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.929700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.930053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.930084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.930487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.930521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.930893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.930925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.931268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.931300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.931733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.931766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 
00:29:09.537 [2024-10-21 12:13:45.932119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.932149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.932513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.932545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.932951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.932983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.933252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.933284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.933530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.933565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.933991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.934023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.934385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.934417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.934687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.934717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.934968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.934998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.935375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.935408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 
00:29:09.537 [2024-10-21 12:13:45.935809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.935841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.936081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.936116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.936492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.936524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.936777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.936808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.937187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.937219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.937582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.937615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.937866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.937897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.938260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.938291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.938665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.537 [2024-10-21 12:13:45.938698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.537 qpair failed and we were unable to recover it. 00:29:09.537 [2024-10-21 12:13:45.939066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.939097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 
00:29:09.538 [2024-10-21 12:13:45.939481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.939514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.939806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.939837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.940224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.940257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.940659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.940690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.941063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.941093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.941484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.941516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.941896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.941927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.942190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.942220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.942571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.942616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.942985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.943015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 
00:29:09.538 [2024-10-21 12:13:45.943258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.943289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.943665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.943698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.944061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.944091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.944455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.944487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.944846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.944878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.945242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.945272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.945642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.945676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.946029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.946059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.946429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.946463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.946819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.946851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 
00:29:09.538 [2024-10-21 12:13:45.947214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.947244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.947595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.947627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.947859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.947890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.948158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.948192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.948475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.948507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.948875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.948906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.949268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.949300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.949685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.949717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.950088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.950119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.950477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.950511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 
00:29:09.538 [2024-10-21 12:13:45.950872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.950904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.951261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.951293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.951561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.951592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.951947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.951976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.952342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.952375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.952743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.952774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.952995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.953026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.953415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.953448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.953712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.538 [2024-10-21 12:13:45.953747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.538 qpair failed and we were unable to recover it. 00:29:09.538 [2024-10-21 12:13:45.954035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.954065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 
00:29:09.539 [2024-10-21 12:13:45.954444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.954477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.954852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.954882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.955253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.955285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.955670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.955703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.956062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.956093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.956457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.956490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.956825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.956855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.957151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.957183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.957513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.957553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.957922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.957954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 
00:29:09.539 [2024-10-21 12:13:45.958314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.958359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.958612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.958644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.958978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.959008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.959345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.959378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.959734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.959765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.960139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.960173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.960546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.960580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.960941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.960971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.961345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.961378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.961733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.961764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 
00:29:09.539 [2024-10-21 12:13:45.962032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.962064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.962422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.962457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.962826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.962857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.963206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.963237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.963591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.963623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.963843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.963873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.964339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.964374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.964738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.964768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.965176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.965206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.965582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.965616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 
00:29:09.539 [2024-10-21 12:13:45.965997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.966028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.966395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.966428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.966635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.966670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.967065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.967097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.967442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.967475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.967741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.967773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.968141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.968172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.968551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.968584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.968796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.539 [2024-10-21 12:13:45.968830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.539 qpair failed and we were unable to recover it. 00:29:09.539 [2024-10-21 12:13:45.969059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.969091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 
00:29:09.540 [2024-10-21 12:13:45.969466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.969499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.969902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.969933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.970300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.970355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.970714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.970745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.971120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.971151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.971500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.971533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.971889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.971920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.972281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.972313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.972685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.972723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.973111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.973143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 
00:29:09.540 [2024-10-21 12:13:45.973381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.973414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.973629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.973658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.974055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.974086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.974447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.974480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.974831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.974863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.975211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.975242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.975610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.975644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.976001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.976032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.976419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.976452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 00:29:09.540 [2024-10-21 12:13:45.976797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.540 [2024-10-21 12:13:45.976828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.540 qpair failed and we were unable to recover it. 
00:29:09.540 [2024-10-21 12:13:45.977053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.540 [2024-10-21 12:13:45.977084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.540 qpair failed and we were unable to recover it.
[... the same connect()/qpair failure triplet repeats verbatim (timestamps 12:13:45.977450 through 12:13:45.988417); the duplicates are collapsed here ...]
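errno 111 on Linux is ECONNREFUSED: each connect() toward the NVMe/TCP target at 10.0.0.2:4420 is actively refused because nothing is accepting on that port yet, so nvme_tcp_qpair_connect_sock cannot bring the qpair up and the initiator keeps retrying. The standalone C sketch below reproduces just that failure mode for illustration; it is not part of the SPDK test, and the address and port are simply copied from the messages above.

/* probe.c - minimal sketch: attempt the same TCP connect the log shows
 * failing. With no listener on 10.0.0.2:4420 this prints
 * "connect() failed, errno = 111 (Connection refused)" on Linux.
 * Build: cc -o probe probe.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in target = {0};
    target.sin_family = AF_INET;
    target.sin_port = htons(4420);                    /* port from the log */
    inet_pton(AF_INET, "10.0.0.2", &target.sin_addr); /* addr from the log */

    if (connect(fd, (struct sockaddr *)&target, sizeof(target)) < 0) {
        /* ECONNREFUSED == 111 on Linux, matching posix_sock_create above */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}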
00:29:09.541 [2024-10-21 12:13:45.988764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:09.541 [2024-10-21 12:13:45.988810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.541 [2024-10-21 12:13:45.988839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.541 qpair failed and we were unable to recover it.
[... identical failure triplets (12:13:45.989089 through 12:13:45.991917) collapsed ...]
[... the identical connect()/qpair failure triplet continues to repeat (12:13:45.992268 through 12:13:46.039378); the duplicates are collapsed here ...]
[... identical failure triplets (12:13:46.039737 through 12:13:46.041673) collapsed ...]
00:29:09.544 [2024-10-21 12:13:46.041849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:09.544 [2024-10-21 12:13:46.041895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:09.544 [2024-10-21 12:13:46.041903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:09.544 [2024-10-21 12:13:46.041912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:09.544 [2024-10-21 12:13:46.041920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... identical failure triplets (12:13:46.042037 through 12:13:46.042478) collapsed ...]
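The app_setup_trace notices above are the runbook for grabbing a trace of this run: invoke the spdk_trace tool against shm instance 0 while the target is alive, or snapshot the shared-memory file it names. A minimal sketch of the snapshot step, assuming /dev/shm/nvmf_trace.0 exists on the host exactly as the notice says (the destination file name is an arbitrary choice for this example; cp(1) does the same job):

/* snap.c - minimal sketch: copy the trace shm file named in the notice
 * above to a local file for offline analysis. */
#include <stdio.h>

int main(void)
{
    const char *src = "/dev/shm/nvmf_trace.0";  /* name from the log notice */
    const char *dst = "nvmf_trace.0.snapshot";  /* hypothetical destination */

    FILE *in = fopen(src, "rb");
    if (!in) {
        perror(src);
        return 1;
    }
    FILE *out = fopen(dst, "wb");
    if (!out) {
        perror(dst);
        fclose(in);
        return 1;
    }

    char buf[1 << 16];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        fwrite(buf, 1, n, out);  /* plain byte-for-byte copy */
    }

    fclose(out);
    fclose(in);
    return 0;
}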
[... identical failure triplets (12:13:46.042843 through 12:13:46.044295) collapsed; the reactor notices below were interleaved with them ...]
00:29:09.545 [2024-10-21 12:13:46.044163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:09.545 [2024-10-21 12:13:46.044399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:09.545 [2024-10-21 12:13:46.044552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:09.545 [2024-10-21 12:13:46.044553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... identical failure triplets (12:13:46.044713 through 12:13:46.045980) collapsed ...]
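The four reactor threads land on cores 4 through 7, consistent with the earlier "Total cores available: 4" notice; the SPDK event framework starts one reactor per core selected by the application's core mask. The mask actually passed to this run is not visible in this excerpt, so the 0xF0 below is an assumed value chosen only because it selects exactly cores 4-7; the sketch just expands such a mask into core numbers.

/* cores.c - minimal sketch: expand an SPDK-style hex core mask into the
 * cores it selects. 0xF0 is an assumption matching the reactor_run
 * notices above (cores 4-7); the run's real mask is not in this log. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0ULL;  /* assumed core mask */

    printf("mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");  /* -> mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}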
[... identical failure triplets (12:13:46.046342 through 12:13:46.057008) collapsed; every connect() to 10.0.0.2:4420 in this interval was refused and no qpair recovered ...]
00:29:09.545 [2024-10-21 12:13:46.057262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.545 [2024-10-21 12:13:46.057293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.545 qpair failed and we were unable to recover it. 00:29:09.545 [2024-10-21 12:13:46.057437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.545 [2024-10-21 12:13:46.057472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.545 qpair failed and we were unable to recover it. 00:29:09.545 [2024-10-21 12:13:46.057716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.545 [2024-10-21 12:13:46.057749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.545 qpair failed and we were unable to recover it. 00:29:09.545 [2024-10-21 12:13:46.058106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.545 [2024-10-21 12:13:46.058138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.545 qpair failed and we were unable to recover it. 00:29:09.545 [2024-10-21 12:13:46.058468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.545 [2024-10-21 12:13:46.058501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.058865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.058896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.059162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.059198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.059576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.059609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.059958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.059991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.060357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.060390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 
00:29:09.546 [2024-10-21 12:13:46.060755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.060787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.061048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.061078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.061390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.061423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.061700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.061731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.062076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.062106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.062470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.062503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.062860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.062893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.063240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.063271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.063664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.063696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.064060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.064091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 
00:29:09.546 [2024-10-21 12:13:46.064446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.064480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.064830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.064862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.065077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.065107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.065462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.065494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.065858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.065889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.066245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.066277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.066651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.066684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.067038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.067070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.067439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.067473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.067828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.067859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 
00:29:09.546 [2024-10-21 12:13:46.068217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.068248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.068492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.068524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.068865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.068903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.069140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.069171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.069415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.069448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.069830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.069862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.070216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.070248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.070599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.070632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.070981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.071014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 00:29:09.546 [2024-10-21 12:13:46.071366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.546 [2024-10-21 12:13:46.071398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.546 qpair failed and we were unable to recover it. 
00:29:09.546 [2024-10-21 12:13:46.071756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.071788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.072154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.072188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.072525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.072556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.072900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.072931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.073302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.073351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.073699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.073731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.074087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.074120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.074349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.074381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.074629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.074664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.075010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.075043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 
00:29:09.547 [2024-10-21 12:13:46.075411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.075444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.075783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.075814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.076168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.076201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.076418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.076452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.076862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.076893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.077246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.077278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.077680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.077715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.078106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.078139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.078369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.078404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.078753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.078786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 
00:29:09.547 [2024-10-21 12:13:46.079144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.079175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.079491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.079524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.079870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.079903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.080133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.080167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.080500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.080532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.080877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.080909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.081265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.081297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.081688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.081720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.082037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.082072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.082360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.082394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 
00:29:09.547 [2024-10-21 12:13:46.082746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.082777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.083129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.083161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.083396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.083437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.083661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.083695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.084030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.084060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.084411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.084444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.084774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.084806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.085190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.085221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.085508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.085541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 00:29:09.547 [2024-10-21 12:13:46.085959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.085990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.547 qpair failed and we were unable to recover it. 
00:29:09.547 [2024-10-21 12:13:46.086346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.547 [2024-10-21 12:13:46.086379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.086826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.086858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.087206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.087237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.087349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.087380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.087782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.087813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.088032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.088064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.088362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.088394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.088754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.088786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.089136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.089167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.089554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.089586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 
00:29:09.548 [2024-10-21 12:13:46.089932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.089962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.090314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.090361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.090623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.090653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.090996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.091027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.091397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.091430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.091672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.091702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.092061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.092092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.092434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.092468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.092706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.092737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.093110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.093141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 
00:29:09.548 [2024-10-21 12:13:46.093518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.093551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.093900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.093931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.094287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.094318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.094691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.094722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.095046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.095079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.095308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.095358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.095701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.095731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.096095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.096127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.096490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.096523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.096899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.096930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 
00:29:09.548 [2024-10-21 12:13:46.097294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.097337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.097479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.097508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.097883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.097926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.098167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.098197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.098566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.098598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.098820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.098850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.099214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.099246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.099655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.099689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.100057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.100088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 00:29:09.548 [2024-10-21 12:13:46.100342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.548 [2024-10-21 12:13:46.100374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.548 qpair failed and we were unable to recover it. 
00:29:09.548 [2024-10-21 12:13:46.100706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.100736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.100965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.100996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.101352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.101386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.101626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.101658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.102069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.102100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.102448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.102480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.102845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.102876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.103229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.103259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.103632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.103665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 00:29:09.549 [2024-10-21 12:13:46.104019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.549 [2024-10-21 12:13:46.104050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420 00:29:09.549 qpair failed and we were unable to recover it. 
00:29:09.549 [2024-10-21 12:13:46.104433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.549 [2024-10-21 12:13:46.104464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.549 qpair failed and we were unable to recover it.
[... repeated identical connect() failures (errno = 111, ECONNREFUSED) against tqpair=0x7fdf1c000b90 at 10.0.0.2 port 4420 elided; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:29:09.827 [2024-10-21 12:13:46.162969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.827 [2024-10-21 12:13:46.162999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf1c000b90 with addr=10.0.0.2, port=4420
00:29:09.827 qpair failed and we were unable to recover it.
00:29:09.827 Read completed with error (sct=0, sc=8)
00:29:09.827 starting I/O failed
[... 32 outstanding I/Os in total (24 reads, 8 writes) completed with error (sct=0, sc=8), each followed by "starting I/O failed"; duplicate completion lines elided ...]
00:29:09.827 [2024-10-21 12:13:46.163851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:09.827 [2024-10-21 12:13:46.164371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.827 [2024-10-21 12:13:46.164435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.827 qpair failed and we were unable to recover it.
00:29:09.827 [2024-10-21 12:13:46.164912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.827 [2024-10-21 12:13:46.165019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.827 qpair failed and we were unable to recover it.
[... repeated identical connect() failures (errno = 111, ECONNREFUSED) against tqpair=0x7fdf18000b90 at 10.0.0.2 port 4420 elided; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:29:09.828 [2024-10-21 12:13:46.176151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.828 [2024-10-21 12:13:46.176181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.828 qpair failed and we were unable to recover it.
00:29:09.828 [2024-10-21 12:13:46.176517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.176550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.176898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.176930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.177177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.177208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.177575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.177609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.177959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.177989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.178339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.178378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.178680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.178711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.178925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.178955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.179300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.179347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.179694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.179726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 
00:29:09.828 [2024-10-21 12:13:46.179960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.179992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.180344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.180376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.180628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.180659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.181009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.181039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.181277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.181308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.181685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.181717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.182087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.182117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.182472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.182505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.182718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.182749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.183103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.183134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 
00:29:09.828 [2024-10-21 12:13:46.183511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.183542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.183892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.183924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.184159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.184190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.184571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.184603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.184818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.184849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.185073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.185103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.185464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.185495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.185848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.185879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.828 qpair failed and we were unable to recover it. 00:29:09.828 [2024-10-21 12:13:46.186226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-10-21 12:13:46.186257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.186499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.186531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 
00:29:09.829 [2024-10-21 12:13:46.186881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.186912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.187138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.187170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.187570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.187601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.187958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.187989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.188345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.188376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.188602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.188634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.189013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.189044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.189406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.189440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.189811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.189841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.190187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.190217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 
00:29:09.829 [2024-10-21 12:13:46.190581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.190614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.190973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.191005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.191228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.191260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.191637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.191669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.192031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.192062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.192425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.192463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.192831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.192862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.193169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.193200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.193570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.193602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.193952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.193983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 
00:29:09.829 [2024-10-21 12:13:46.194346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.194378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.194733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.194763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.194984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.195015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.195298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.195342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.195723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.195755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.196101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.196131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.196499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.196530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.196887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.196918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.197229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.197260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.197668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.197701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 
00:29:09.829 [2024-10-21 12:13:46.198053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-10-21 12:13:46.198085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.829 qpair failed and we were unable to recover it. 00:29:09.829 [2024-10-21 12:13:46.198482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.198514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.198872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.198902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.199254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.199286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.199523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.199554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.199902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.199934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.200158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.200188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.200569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.200602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.200933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.200964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.201318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.201362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 
00:29:09.830 [2024-10-21 12:13:46.201706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.201736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.202091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.202122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.202494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.202528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.202887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.202918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.203145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.203177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.203493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.203525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.203898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.203928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.204284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.204316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.204683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.204714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.205073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.205104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 
00:29:09.830 [2024-10-21 12:13:46.205455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.205486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.205851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.205882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.206239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.206269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.206514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.206550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.206901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.206933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.207290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.207341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.207724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.207754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.208014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.208048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.208418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.208450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.208837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.208867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 
00:29:09.830 [2024-10-21 12:13:46.209215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.209247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.209620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.209652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.210031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.210062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.210423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.210455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.210825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.210855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.211266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.211298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.211651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.211683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.211897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.211927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.212296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.212369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.212723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.212755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 
00:29:09.830 [2024-10-21 12:13:46.213134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.213164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.830 qpair failed and we were unable to recover it. 00:29:09.830 [2024-10-21 12:13:46.213556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.830 [2024-10-21 12:13:46.213589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.213937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.213967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.214314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.214352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.214672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.214702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.215066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.215097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.215458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.215488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.215841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.215872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.216239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.216271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.216633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.216664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 
00:29:09.831 [2024-10-21 12:13:46.217027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.217057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.217290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.217339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.217670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.217701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.218064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.218094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.218443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.218476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.218847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.218878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.219246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.219276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.219495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.219527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.219845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.219876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.220228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.220260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 
00:29:09.831 [2024-10-21 12:13:46.220633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.220666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.221015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.221045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.221421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.221453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.221799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.221830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.222207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.222237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.222584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.222622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.222955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.222987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.223345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.223376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.223672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.223702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.224053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.224083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 
00:29:09.831 [2024-10-21 12:13:46.224450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.224482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.224849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.224880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.225125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.225155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.225535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.225568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.225913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.225944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.226294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.226335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.226539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.226570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.226819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.226849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.227205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.227235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.227589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.227622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 
00:29:09.831 [2024-10-21 12:13:46.227994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.228025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.831 [2024-10-21 12:13:46.228247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.831 [2024-10-21 12:13:46.228277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.831 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.228381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.228409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.228795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.228826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.229063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.229097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.229357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.229389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.229758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.229790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.230142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.230173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.230500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.230532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-21 12:13:46.230892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.832 [2024-10-21 12:13:46.230924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.832 qpair failed and we were unable to recover it. 
00:29:09.832 [2024-10-21 12:13:46.231277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.231308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.231638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.231670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.232022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.232055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.232419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.232453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.232680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.232710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.233058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.233088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.233466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.233498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.233856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.233887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.234236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.234266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.234493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.234529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.234876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.234907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.235280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.235311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.235693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.235724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.236082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.236112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.236467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.236500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.236865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.236904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.237150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.237184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.237564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.237596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.237957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.237989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.238341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.238373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.238720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.238749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.239114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.239146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.239502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.239535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.239892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.239922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.240275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.240307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.240707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.240738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.241104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.241135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.241363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.241397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.241741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.241772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.242003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.242033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.242378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.242410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.242768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.832 [2024-10-21 12:13:46.242798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-21 12:13:46.243145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.243177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.243565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.243597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.243839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.243869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.244106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.244137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.244498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.244531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.244894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.244924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.245279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.245310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.245678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.245710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.246050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.246081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.246445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.246478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.246854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.246886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.247123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.247157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.247340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.247372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.247617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.247652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.248006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.248037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.248409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.248442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.248801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.248832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.249197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.249228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.249489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.249524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.249903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.249935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.250161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.250191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.250576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.250608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.250978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.251009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.251240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.251277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.251664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.251696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.251911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.251942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.252166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.252197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.252543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.252575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.252816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.252850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.253207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.253240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.253345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.253378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.253624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.253654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.253995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.254026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.254385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.254416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.254718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.254750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.833 qpair failed and we were unable to recover it.
00:29:09.833 [2024-10-21 12:13:46.254975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.833 [2024-10-21 12:13:46.255010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.255384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.255416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.255786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.255817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.256165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.256196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.256540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.256573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.256801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.256834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.257196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.257228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.257598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.257629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.257973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.258004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.258353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.258386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.258777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.258808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.259178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.259210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.259586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.259618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.259842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.259876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.260231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.260262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.260490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.260529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.260901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.260932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.261275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.261306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.261533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.261565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.261905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.261935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.262288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.262328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.262583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.262613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.262963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.262994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.263357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.263389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.263748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.263779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.264124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.264154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.264375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.264407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.264630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.264661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.265055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.265086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.265309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.265350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.265720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.265750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.266125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.266156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.266524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.266556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.266900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.266932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.267292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.267332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.267687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.267717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.268056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.268087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.268444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.268477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.268723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.268753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.269101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.269133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.834 qpair failed and we were unable to recover it.
00:29:09.834 [2024-10-21 12:13:46.269354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.834 [2024-10-21 12:13:46.269387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.269764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.269795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.270169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.270200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.270463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.270498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.270854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.270886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.271248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.271279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.271624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.271658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.272028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.272060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.272285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.272316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.272728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.272760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.273103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.273134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.273495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.273528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.273847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.273878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.274239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.274271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.274513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.274544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.274763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.274799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.275188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.275219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.275579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.275611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.275964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.275995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.276205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.276237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.276579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.276612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.276856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.276887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.277239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.277270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.277501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.277535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.277902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.277931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.278151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.278182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.278596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.278628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.278975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.279005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.279398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.279430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.279796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.279828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.280197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.280227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.280609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.280642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.280861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.280895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.281247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.281277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.281612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.281645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.282003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.282033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.282394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.282426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.282811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.282842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.283186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.283218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.283591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.283624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.283973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.835 [2024-10-21 12:13:46.284003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.835 qpair failed and we were unable to recover it.
00:29:09.835 [2024-10-21 12:13:46.284364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.284396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.284793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.284823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.285192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.285225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.285578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.285610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.285830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.285860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.286116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.286147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.286558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.286589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.286940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.286970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.287366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.287398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.287648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.287680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.288024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.288055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.288307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.288360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.288590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.288621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.288988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.289019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.289368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.289406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.289766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.289796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.290149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.290180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.290562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.290594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.290952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.290982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.291236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.291266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.291517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.291552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.291925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.836 [2024-10-21 12:13:46.291954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:09.836 qpair failed and we were unable to recover it.
00:29:09.836 [2024-10-21 12:13:46.292315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.292355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.292683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.292714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.293071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.293101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.293503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.293535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.293913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.293944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.294306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.294345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.294707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.294738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.295093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.295124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.295475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.295507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.295910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.295941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 
00:29:09.836 [2024-10-21 12:13:46.296161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.296193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.296531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.296564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.296914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.296944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.297300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.297342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.297596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.297628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.297976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.298007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.298365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.298397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.298736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.836 [2024-10-21 12:13:46.298767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.836 qpair failed and we were unable to recover it. 00:29:09.836 [2024-10-21 12:13:46.299117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.299148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.299503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.299536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 
00:29:09.837 [2024-10-21 12:13:46.299890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.299920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.300279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.300309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.300699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.300730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.301087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.301119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.301340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.301372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.301685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.301717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.301940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.301971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.302221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.302252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.302635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.302667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.303050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.303080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 
00:29:09.837 [2024-10-21 12:13:46.303426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.303458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.303827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.303859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.304215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.304251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.304471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.304503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.304873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.304903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.305093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.305123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.305474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.305506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.305862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.305893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.306250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.306281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.306542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.306573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 
00:29:09.837 [2024-10-21 12:13:46.306917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.306948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.307291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.307333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.307539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.307570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.307832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.307863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.308209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.308240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.308599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.308633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.308933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.308964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.309218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.309252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.309584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.309616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.309981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.310011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 
00:29:09.837 [2024-10-21 12:13:46.310328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.310361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.310718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.310750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.311105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.311136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.311520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.311552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.311952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.311983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.312342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.312373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.312719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.312750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.313130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.313161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.837 qpair failed and we were unable to recover it. 00:29:09.837 [2024-10-21 12:13:46.313537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.837 [2024-10-21 12:13:46.313568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.313961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.313993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 
00:29:09.838 [2024-10-21 12:13:46.314343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.314376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.314612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.314642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.314983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.315014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.315370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.315402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.315644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.315675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.316018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.316049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.316406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.316439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.316776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.316806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.317164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.317195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.317543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.317575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 
00:29:09.838 [2024-10-21 12:13:46.317822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.317853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.318225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.318255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.318609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.318647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.318762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.318795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.319223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.319253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.319620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.319653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.319880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.319910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.320166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.320197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.320517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.320550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.320896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.320928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 
00:29:09.838 [2024-10-21 12:13:46.321152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.321182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.321587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.321620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.321964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.321996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.322378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.322409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.322797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.322828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.323035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.323067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.323427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.323459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.323696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.323727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.324104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.324135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.324389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.324421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 
00:29:09.838 [2024-10-21 12:13:46.324719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.324750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.325101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.325132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.325350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.325381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.325693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.838 [2024-10-21 12:13:46.325723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.838 qpair failed and we were unable to recover it. 00:29:09.838 [2024-10-21 12:13:46.326086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.326118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.326387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.326420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.326777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.326808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.327167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.327197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.327588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.327620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.327948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.327979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 
00:29:09.839 [2024-10-21 12:13:46.328341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.328374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.328693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.328725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.329070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.329102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.329462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.329495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.329865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.329896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.330261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.330293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.330545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.330577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.330817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.330848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.331259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.331289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.331646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.331679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 
00:29:09.839 [2024-10-21 12:13:46.332037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.332067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.332448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.332482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.332852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.332889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.333242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.333273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.333662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.333694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.334060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.334091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.334453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.334484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.334851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.334881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.335236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.335268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.335631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.335663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 
00:29:09.839 [2024-10-21 12:13:46.336020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.336051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.336422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.336453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.336812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.336843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.337074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.337105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.337541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.337573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.337898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.337931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.338283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.338313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.338672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.338704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.339077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.339108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.339488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.339521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 
00:29:09.839 [2024-10-21 12:13:46.339864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.339896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.340253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.340284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.340661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.340694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.340952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.839 [2024-10-21 12:13:46.340984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.839 qpair failed and we were unable to recover it. 00:29:09.839 [2024-10-21 12:13:46.341370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.341411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.341795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.341827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.342180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.342211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.342591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.342623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.343014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.343045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.343405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.343438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 
00:29:09.840 [2024-10-21 12:13:46.343803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.343833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.344203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.344235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.344586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.344618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.344963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.344994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.345351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.345382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.345742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.345773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.346134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.346165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.346544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.346576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.346792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.346822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 00:29:09.840 [2024-10-21 12:13:46.347115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.347146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it. 
00:29:09.840 [2024-10-21 12:13:46.347505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.840 [2024-10-21 12:13:46.347536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:09.840 qpair failed and we were unable to recover it.
00:29:10.119 [log condensed: the same three-line failure — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats verbatim for every retry from 2024-10-21 12:13:46.347903 through 12:13:46.423964, differing only in timestamps]
00:29:10.119 [2024-10-21 12:13:46.424333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.424365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.424686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.424717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.425125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.425155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.425501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.425533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.425885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.425916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.426276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.426307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.426710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.426741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.426947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.426979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.427343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.427374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-10-21 12:13:46.427688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-10-21 12:13:46.427718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-10-21 12:13:46.428073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.428103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.428319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.428361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.428612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.428642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.428879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.428909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.429251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.429282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.429655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.429687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.429897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.429927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.430301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.430342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.430686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.430717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.431081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.431111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-10-21 12:13:46.431472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.431504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.431870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.431901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.432274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.432304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.432551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.432585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.432927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.432959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.433311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.433352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.433567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.433597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.433966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.433997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.434214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.434244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.434494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.434526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-10-21 12:13:46.434900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.434930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.435299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.435337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.435685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.435723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.436091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.436123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.436481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.436512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.436861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.436892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.437111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.437145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.437491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.437522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.437747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.437777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.438116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.438146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-10-21 12:13:46.438508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.438540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.438897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.438929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.439276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.439307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.439631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.439662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.440032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.440063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.440417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.440449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.440831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.440863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.441225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.441255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.441616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.441647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-10-21 12:13:46.442001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-10-21 12:13:46.442031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-10-21 12:13:46.442392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.442424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.442786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.442815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.443165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.443196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.443571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.443603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.443952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.443983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.444345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.444376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.444748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.444779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.445118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.445149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.445497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.445529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.445887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.445919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-10-21 12:13:46.446296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.446348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.446552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.446584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.446945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.446975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.447342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.447374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.447787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.447817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.448178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.448208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.448437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.448468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.448718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.448748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.449094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.449123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.449478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.449509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-10-21 12:13:46.449863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.449894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.450150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.450180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.450585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.450617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.450991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.451021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.451266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.451297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.451664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.451696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.452058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.452088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.452446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.452479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.452829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.452859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.453074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.453104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-10-21 12:13:46.453474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.453505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.453747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.453778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.454010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.454044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.454398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.454430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.454532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.454562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.454917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.454946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.455295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.455338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.455683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.455714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.456078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-10-21 12:13:46.456108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-10-21 12:13:46.456421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.456453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 
00:29:10.122 [2024-10-21 12:13:46.456827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.456857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.457201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.457231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.457588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.457619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.457971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.458001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.458348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.458380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.458638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.458670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.459026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.459056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.459261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.459292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.459624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.459656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.460012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.460049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 
00:29:10.122 [2024-10-21 12:13:46.460407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.460439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.460792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.460822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.461185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.461215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.461578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.461609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.461843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.461873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.462131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.462161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.462514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.462546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.462889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.462918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.463247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.463277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.463634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.463666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 
00:29:10.122 [2024-10-21 12:13:46.463909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.463941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.464289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.464332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.464542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.464571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.464922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.464954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.465307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.465350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.465694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.465724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.465950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.465981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.466187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.466218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.466573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.466606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.466814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.466845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 
00:29:10.122 [2024-10-21 12:13:46.467205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.467235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.467462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.467494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.122 [2024-10-21 12:13:46.467695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.122 [2024-10-21 12:13:46.467726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.122 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.467939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.467969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.468213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.468243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.468504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.468537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.468796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.468829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.469187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.469217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.469583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.469615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 00:29:10.123 [2024-10-21 12:13:46.469983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.123 [2024-10-21 12:13:46.470013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420 00:29:10.123 qpair failed and we were unable to recover it. 
00:29:10.123 [2024-10-21 12:13:46.470270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.123 [2024-10-21 12:13:46.470304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf18000b90 with addr=10.0.0.2, port=4420
00:29:10.123 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111, sock connection error on tqpair=0x7fdf18000b90 toward 10.0.0.2 port 4420, "qpair failed and we were unable to recover it.") repeats back-to-back, differing only in microsecond timestamps, through 2024-10-21 12:13:46.533906 ...]
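Note: errno 111 on Linux is ECONNREFUSED — the target actively refused the TCP connection, typically because nothing is listening on 10.0.0.2:4420 at that moment, so every reconnect attempt the initiator makes fails immediately. The minimal sketch below uses plain POSIX sockets (not SPDK's posix.c implementation; only the address and port are taken from the log, everything else is illustrative) and surfaces the same errno:

/* Sketch: a TCP connect() to an address with no listener fails with
 * errno 111 (ECONNREFUSED), the errno reported by posix_sock_create above.
 * Plain POSIX sockets for illustration, not SPDK's sock layer. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

        if (fd < 0)
                return 1;
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                /* With no NVMe/TCP listener on 10.0.0.2:4420 this prints:
                 * connect() failed, errno = 111 (Connection refused) */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
}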
00:29:10.128 Read completed with error (sct=0, sc=8)
00:29:10.128 starting I/O failed
[... the remaining outstanding Read/Write commands on the qpair complete the same way, each "completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:29:10.129 [2024-10-21 12:13:46.534722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
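Note: in an NVMe completion, sct=0 selects the Generic Command Status type, and within that type status code 0x8 is defined by the NVMe base specification as "Command Aborted due to SQ Deletion" (SPDK's nvme_spec.h names this SPDK_NVME_SC_ABORTED_SQ_DELETION) — consistent with the host aborting all outstanding I/O once the qpair's socket is gone. The CQ transport error -6 is -ENXIO, which the log itself expands as "No such device or address". A minimal decode of the (sct, sc) pair seen here, using only spec-defined values rather than SPDK helpers:

/* Sketch: decoding the (sct, sc) completion status printed above, using the
 * NVMe-spec meanings for the Generic Command Status type (SCT 0x0).
 * Plain C for illustration; the values come from the log, not an SPDK API. */
#include <stdio.h>

static const char *generic_status_str(unsigned int sc)
{
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "(other Generic Command Status code)";
        }
}

int main(void)
{
        unsigned int sct = 0, sc = 8;   /* as reported: (sct=0, sc=8) */

        if (sct == 0)
                printf("sct=0x%x sc=0x%02x -> %s\n", sct, sc, generic_status_str(sc));
        return 0;
}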
00:29:10.129 [2024-10-21 12:13:46.535781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.129 [2024-10-21 12:13:46.535888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420
00:29:10.129 qpair failed and we were unable to recover it.
[last 3 messages repeated continuously for tqpair=0x7fdf24000b90, 12:13:46.536342 through 12:13:46.605374, roughly 190 occurrences in total, every attempt failing with errno = 111 and ending with "qpair failed and we were unable to recover it."]
00:29:10.134 [2024-10-21 12:13:46.605710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.605739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.605946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.605975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.606361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.606391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.606747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.606778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.607144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.607175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.607470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.607500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.607716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.607746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.608128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.608157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.608488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.608519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-10-21 12:13:46.608844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.608873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-10-21 12:13:46.609242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-10-21 12:13:46.609271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.609501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.609532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.609906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.609935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.610289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.610318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.610693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.610722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.610955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.610985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.611271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.611303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.611540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.611569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.611943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.611980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.612378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.612411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 
00:29:10.135 [2024-10-21 12:13:46.612648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.612680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.612890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.612923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.613305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.613345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.613693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.613722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.614051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.614079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.614435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.614466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.614692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.614721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.615068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.615097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.615469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.615501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.615863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.615892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 
00:29:10.135 [2024-10-21 12:13:46.616122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.616151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.616363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.616394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.616742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.616773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.617024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.617053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.617435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.617466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.617687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.617718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.618045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.618075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.618417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.618448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.618657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.618686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.619067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.619096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 
00:29:10.135 [2024-10-21 12:13:46.619420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.619450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.619824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.619852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.620229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.620258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.620483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.620516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.620738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.620771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.621112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.621140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.621480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.621511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.621863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.621892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.622214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.622244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 00:29:10.135 [2024-10-21 12:13:46.622560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.622590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.135 qpair failed and we were unable to recover it. 
00:29:10.135 [2024-10-21 12:13:46.622943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.135 [2024-10-21 12:13:46.622972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.623337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.623368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.623720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.623749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.624074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.624102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.624458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.624488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.624877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.624906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.625288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.625317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.625661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.625689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.626060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.626095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.626450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.626480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 
00:29:10.136 [2024-10-21 12:13:46.626853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.626882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.627199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.627228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.627565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.627595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.627809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.627838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.628211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.628240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.628575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.628606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.628990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.629019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.629402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.629434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.629796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.629827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.630198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.630228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 
00:29:10.136 [2024-10-21 12:13:46.630456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.630488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.630858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.630887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.631126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.631155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.631493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.631524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.631892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.631921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.632291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.632330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.632542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.632572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.632891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.632920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.633301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.633338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.136 [2024-10-21 12:13:46.633678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.633706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 
00:29:10.136 [2024-10-21 12:13:46.634055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.136 [2024-10-21 12:13:46.634085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.136 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.634402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.634434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.634670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.634700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.635063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.635092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.635463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.635493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.635815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.635845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.636169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.636198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.636436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.636466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.636700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.636729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.637066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.637095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 
00:29:10.137 [2024-10-21 12:13:46.637316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.637357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.637732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.637762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.637971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.638000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.638338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.638369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.638692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.638721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.639071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.639101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.639467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.639498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.639812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.639841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.640230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.640265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.640608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.640638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 
00:29:10.137 [2024-10-21 12:13:46.640851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.640880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.641240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.641268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.641641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.641672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.642020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.642049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.642406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.642436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.642823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.642852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.643067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.643097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.643451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.643482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.643835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.643864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.644230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.644260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 
00:29:10.137 [2024-10-21 12:13:46.644633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.644663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.645027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.645055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.645414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-10-21 12:13:46.645445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.137 qpair failed and we were unable to recover it. 00:29:10.137 [2024-10-21 12:13:46.645814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.645844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.646236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.646265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.646645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.646676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.647057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.647085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.647404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.647435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.647651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.647681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.648058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.648087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-10-21 12:13:46.648348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.648381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.648614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.648646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.649036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.649066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.649435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.649466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.649790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.649820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.650181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.650212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.650555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.650586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.650939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.650968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.651291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.651332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-10-21 12:13:46.651665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.651694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-10-21 12:13:46.652043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-10-21 12:13:46.652073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it.
[... the identical three-message sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every retried connection attempt from 12:13:46.652 through 12:13:46.726; only the timestamps differ ...]
00:29:10.415 [2024-10-21 12:13:46.726835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.726864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it.
00:29:10.415 [2024-10-21 12:13:46.727234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.727263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.727616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.727646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.727875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.727906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.728249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.728277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.728522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.728552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.728911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.728940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.729305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.729342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.729671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.729700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.730085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.730113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.730483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.730515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 
00:29:10.415 [2024-10-21 12:13:46.730881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.730910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.731137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.731167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.731535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.731566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.731826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.731854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.732241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.732270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.732664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.732695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.733062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.733090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.733339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.733369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.733739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.733768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.734090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.734118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 
00:29:10.415 [2024-10-21 12:13:46.734498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.734528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.734763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.734792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.735155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.735183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.735433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.735464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.735803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.735831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.736065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.736101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.736375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.736406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.736661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.736689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.737021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.737051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 00:29:10.415 [2024-10-21 12:13:46.737388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.415 [2024-10-21 12:13:46.737418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.415 qpair failed and we were unable to recover it. 
00:29:10.415 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:10.416 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:10.416 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:10.416 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:10.416 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.416 [2024-10-21 12:13:46.743948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.743978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.744299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.744340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.744589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.744618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.744845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.744875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.745231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.745261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.745619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.745651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.745751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.745782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.746109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.746138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.746361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.746392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.746769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.746799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 
00:29:10.416 [2024-10-21 12:13:46.747143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.747172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.747578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.747609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.747967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.747996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.748381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.748411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.748623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.748657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.749030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.749058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.749466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.749503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.749881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.749911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.750263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.750293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.750687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.750718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 
00:29:10.416 [2024-10-21 12:13:46.751102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.751131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.751458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.751488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.751864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.751895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.752249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.752278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.752713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.752745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.753076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.753107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.753315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.753353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.753608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.753637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.416 qpair failed and we were unable to recover it. 00:29:10.416 [2024-10-21 12:13:46.753861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.416 [2024-10-21 12:13:46.753889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.754121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.754153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 
00:29:10.417 [2024-10-21 12:13:46.754585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.754616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.754725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.754757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.755115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.755146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.755366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.755399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.755508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.755537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.755888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.755918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.756146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.756175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.756543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.756573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.756777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.756808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.757179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.757207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 
00:29:10.417 [2024-10-21 12:13:46.757458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.757492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.757868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.757898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.758236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.758266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.758702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.758732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.759094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.759125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.759362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.759397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.759762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.759791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.760180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.760210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.760565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.760596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.760859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.760889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 
00:29:10.417 [2024-10-21 12:13:46.761236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.761265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.761639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.761669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.761877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.761906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.762267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.762297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.762655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.762686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.763056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.763085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.763452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.763492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.763719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.763753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.764153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.764184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.764406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.764440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 
00:29:10.417 [2024-10-21 12:13:46.764819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.764847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.765065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.417 [2024-10-21 12:13:46.765093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.417 qpair failed and we were unable to recover it. 00:29:10.417 [2024-10-21 12:13:46.765468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.765499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.765860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.765889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.766107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.766136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.766479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.766509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.766868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.766898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.767268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.767297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.767681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.767712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.767941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.767970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 
00:29:10.418 [2024-10-21 12:13:46.768343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.768375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.768595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.768626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.768992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.769021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.769252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.769283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.769519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.769551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.769690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.769722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.770082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.770114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.770466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.770499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.770816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.770845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 00:29:10.418 [2024-10-21 12:13:46.771059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.418 [2024-10-21 12:13:46.771089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.418 qpair failed and we were unable to recover it. 
00:29:10.418 [2024-10-21 12:13:46.771457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.418 [2024-10-21 12:13:46.771489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420
00:29:10.418 qpair failed and we were unable to recover it.
00:29:10.418 [... this connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triple repeats continuously for tqpair=0x7fdf24000b90 (10.0.0.2:4420) through 12:13:46.843; duplicate triples are elided below and only unique log lines are kept ...]
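Errno 111 on Linux is ECONNREFUSED: connect() reached the target address, but nothing was accepting on 10.0.0.2:4420, which is the expected failure mode while the target side of this disconnect test is unavailable. For reference, the errno can be decoded with a one-liner (illustrative only, not part of the test run):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused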
00:29:10.419 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:10.419 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:10.419 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:10.419 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
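The trap line registers cleanup (shared-memory dump via process_shm, then nvmftestfini) on SIGINT/SIGTERM/EXIT, and the rpc_cmd trace creates the backing device for the test. A minimal sketch of the equivalent direct call, assuming a running SPDK target and the stock scripts/rpc.py (rpc_cmd is the autotest wrapper around it):

    # Sketch: create a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0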
00:29:10.422 Malloc0
00:29:10.422 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:10.422 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:10.422 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:10.422 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.422 [2024-10-21 12:13:46.830893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
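The NOTICE above is the target acknowledging transport initialization. A minimal sketch of the underlying RPC, assuming the stock scripts/rpc.py; note that the test's nvmf_create_transport is a shell wrapper from nvmf/common.sh, and the trailing -o is consumed by that wrapper (its exact meaning is an assumption here, not shown in this log):

    # Sketch: register the NVMe-oF TCP transport on a running target
    ./scripts/rpc.py nvmf_create_transport -t TCP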
00:29:10.423 [2024-10-21 12:13:46.837886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.837915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.838246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.838276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.838631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.838661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.839029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.839057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.839333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.839364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.839586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.839615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.839951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.839980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.840215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.840243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.423 [2024-10-21 12:13:46.840491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.840522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 
00:29:10.423 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.423 [2024-10-21 12:13:46.840890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.840919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.423 [2024-10-21 12:13:46.841273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.841302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.841687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.841717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.842062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.842090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.842445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.842475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.842748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.842780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.843143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.843174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.843507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.843537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.843857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.843887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 
00:29:10.423 [2024-10-21 12:13:46.844230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.844259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.844620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.844651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.845017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.845046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.845276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.845304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.845560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.845590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.845967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.845996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.846371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.846402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.846768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.846799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.847156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.847186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.847538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.847576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 
00:29:10.423 [2024-10-21 12:13:46.847928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.847957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.848316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.848354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.423 qpair failed and we were unable to recover it. 00:29:10.423 [2024-10-21 12:13:46.848571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.423 [2024-10-21 12:13:46.848600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.848972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.849001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.849357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.849388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.849615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.849647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.849865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.849895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.850110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.850140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.850488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.850520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.850756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.850788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 
00:29:10.424 [2024-10-21 12:13:46.851027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.851057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.851426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.851458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.851838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.851868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.424 [2024-10-21 12:13:46.852144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.852174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.424 [2024-10-21 12:13:46.852511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.852542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.424 [2024-10-21 12:13:46.852914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.852945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.424 [2024-10-21 12:13:46.853166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.853196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.853610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.853642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 
00:29:10.424 [2024-10-21 12:13:46.853998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.854028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.854354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.854385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.854759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.854788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.855135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.855165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.855392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.855423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.855808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.855838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.856194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.856228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.856452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.856483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.856850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.856880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.857249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.857279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 
00:29:10.424 [2024-10-21 12:13:46.857535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.857566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.857822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.857851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.858219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.858248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.858637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.858668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.858897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.858926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.859295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.859335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.859702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.859732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.860055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.860085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.860308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.860363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.860697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.860726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 
00:29:10.424 [2024-10-21 12:13:46.860956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.860985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.861350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.861381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.861727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.424 [2024-10-21 12:13:46.861755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.424 qpair failed and we were unable to recover it. 00:29:10.424 [2024-10-21 12:13:46.862128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.862156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.862548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.862578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.862801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.862830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.863203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.863232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.863612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.863642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.864026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.425 [2024-10-21 12:13:46.864056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 
00:29:10.425 [2024-10-21 12:13:46.864391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.864422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.425 [2024-10-21 12:13:46.864787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.864816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.425 [2024-10-21 12:13:46.865197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.865227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.865561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.865590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.865924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.865952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.866214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.866245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.866599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.866630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.866996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.867026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 
00:29:10.425 [2024-10-21 12:13:46.867393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.867423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.867660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.867689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.867988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.868017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.868350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.868382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.868737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.868767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.869139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.869168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.869561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.869591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.869929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.869964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.870311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.870352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.870705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.870734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 
00:29:10.425 [2024-10-21 12:13:46.871105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.425 [2024-10-21 12:13:46.871133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf24000b90 with addr=10.0.0.2, port=4420 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 [2024-10-21 12:13:46.871288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.425 [2024-10-21 12:13:46.882144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.425 [2024-10-21 12:13:46.882302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.425 [2024-10-21 12:13:46.882367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.425 [2024-10-21 12:13:46.882392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.425 [2024-10-21 12:13:46.882413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.425 [2024-10-21 12:13:46.882470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.425 qpair failed and we were unable to recover it. 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.425 12:13:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1160974 00:29:10.425 [2024-10-21 12:13:46.892033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.425 [2024-10-21 12:13:46.892120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.425 [2024-10-21 12:13:46.892152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.425 [2024-10-21 12:13:46.892168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.425 [2024-10-21 12:13:46.892182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.425 [2024-10-21 12:13:46.892216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.425 qpair failed and we were unable to recover it. 
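A minimal standalone sketch of the target-side setup that the rpc_cmd trace above drives, assuming SPDK's scripts/rpc.py is on PATH, the target application is already running, and the Malloc0 bdev already exists (the harness wraps all of this in rpc_cmd; the nvmf_create_transport line is inferred from the "*** TCP Transport Init ***" notice rather than taken from the trace):

rpc.py nvmf_create_transport -t tcp                                               # TCP transport init
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Only after the data-path listener is added does the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice appear, which is why every connect() before that point fails with errno 111.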
[... the six-line CONNECT failure block above (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, Failed to poll NVMe-oF Fabric CONNECT command, Failed to connect tqpair=0x7fdf24000b90, CQ transport error -6 on qpair id 1, each ending in "qpair failed and we were unable to recover it.") repeats continuously while the test waits; only the timestamps differ. Final occurrence: ...]
00:29:10.690 [2024-10-21 12:13:47.162685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.690 [2024-10-21 12:13:47.162751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.690 [2024-10-21 12:13:47.162767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.690 [2024-10-21 12:13:47.162775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.690 [2024-10-21 12:13:47.162781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:10.690 [2024-10-21 12:13:47.162798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-10-21 12:13:47.172705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.172798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.172820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.172828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.172835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.172852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-10-21 12:13:47.182749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.182815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.182832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.182840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.182847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.182864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-10-21 12:13:47.192796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.192861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.192878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.192885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.192891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.192908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 
00:29:10.690 [2024-10-21 12:13:47.202842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.202947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.202963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.202971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.202978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.202995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-10-21 12:13:47.212826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.212894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.212918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.212925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.212932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.212948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-10-21 12:13:47.222845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.222910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.222927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.222934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.222940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.222957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 
00:29:10.690 [2024-10-21 12:13:47.232904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.232998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.233015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.690 [2024-10-21 12:13:47.233023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.690 [2024-10-21 12:13:47.233030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.690 [2024-10-21 12:13:47.233046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-10-21 12:13:47.242972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.690 [2024-10-21 12:13:47.243051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.690 [2024-10-21 12:13:47.243087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.691 [2024-10-21 12:13:47.243096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.691 [2024-10-21 12:13:47.243103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.691 [2024-10-21 12:13:47.243128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.691 qpair failed and we were unable to recover it. 00:29:10.691 [2024-10-21 12:13:47.252945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.691 [2024-10-21 12:13:47.253015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.691 [2024-10-21 12:13:47.253051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.691 [2024-10-21 12:13:47.253060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.691 [2024-10-21 12:13:47.253067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.691 [2024-10-21 12:13:47.253091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.691 qpair failed and we were unable to recover it. 
00:29:10.691 [2024-10-21 12:13:47.262995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.691 [2024-10-21 12:13:47.263064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.691 [2024-10-21 12:13:47.263099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.691 [2024-10-21 12:13:47.263109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.691 [2024-10-21 12:13:47.263116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.691 [2024-10-21 12:13:47.263140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.691 qpair failed and we were unable to recover it. 00:29:10.691 [2024-10-21 12:13:47.273001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.691 [2024-10-21 12:13:47.273066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.691 [2024-10-21 12:13:47.273087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.691 [2024-10-21 12:13:47.273094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.691 [2024-10-21 12:13:47.273101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.691 [2024-10-21 12:13:47.273119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.691 qpair failed and we were unable to recover it. 00:29:10.954 [2024-10-21 12:13:47.282961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.954 [2024-10-21 12:13:47.283035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.954 [2024-10-21 12:13:47.283053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.954 [2024-10-21 12:13:47.283061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.954 [2024-10-21 12:13:47.283067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.954 [2024-10-21 12:13:47.283085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.954 qpair failed and we were unable to recover it. 
00:29:10.954 [2024-10-21 12:13:47.293054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.954 [2024-10-21 12:13:47.293115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.954 [2024-10-21 12:13:47.293133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.954 [2024-10-21 12:13:47.293141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.954 [2024-10-21 12:13:47.293147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.954 [2024-10-21 12:13:47.293164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.954 qpair failed and we were unable to recover it. 00:29:10.954 [2024-10-21 12:13:47.303104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.954 [2024-10-21 12:13:47.303189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.954 [2024-10-21 12:13:47.303213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.303220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.303227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.303244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.313107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.313174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.313192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.313199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.313206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.313222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 
00:29:10.955 [2024-10-21 12:13:47.323175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.323253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.323270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.323278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.323284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.323301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.333209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.333285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.333303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.333311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.333317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.333340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.343245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.343354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.343372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.343379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.343386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.343408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 
00:29:10.955 [2024-10-21 12:13:47.353245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.353314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.353335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.353343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.353349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.353366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.363307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.363390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.363406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.363414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.363420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.363437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.373312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.373380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.373397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.373404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.373411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.373428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 
00:29:10.955 [2024-10-21 12:13:47.383313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.383380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.383400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.383408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.383414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.383432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.393364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.393471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.393494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.393502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.393509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.393526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.403392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.403459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.403475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.403482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.403489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.403506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 
00:29:10.955 [2024-10-21 12:13:47.413398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.413465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.413482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.413490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.413496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.413513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.423438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.423537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.423553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.423561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.423567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.423584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 00:29:10.955 [2024-10-21 12:13:47.433450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.433522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.433541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.433549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.433561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.955 [2024-10-21 12:13:47.433579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.955 qpair failed and we were unable to recover it. 
00:29:10.955 [2024-10-21 12:13:47.443533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.955 [2024-10-21 12:13:47.443601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.955 [2024-10-21 12:13:47.443618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.955 [2024-10-21 12:13:47.443626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.955 [2024-10-21 12:13:47.443633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.443650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.453525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.453589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.453607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.453614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.453620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.453637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.463520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.463585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.463602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.463610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.463616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.463633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 
00:29:10.956 [2024-10-21 12:13:47.473572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.473653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.473672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.473680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.473686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.473703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.483643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.483722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.483742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.483750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.483757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.483773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.493645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.493706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.493724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.493732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.493739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.493755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 
00:29:10.956 [2024-10-21 12:13:47.503658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.503723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.503740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.503747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.503753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.503770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.513674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.513739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.513756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.513763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.513769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.513786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.523746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.523823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.523840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.523847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.523860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.523877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 
00:29:10.956 [2024-10-21 12:13:47.533737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.533802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.533821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.533828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.533834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.533851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:10.956 [2024-10-21 12:13:47.543773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.956 [2024-10-21 12:13:47.543836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.956 [2024-10-21 12:13:47.543853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.956 [2024-10-21 12:13:47.543860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.956 [2024-10-21 12:13:47.543867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:10.956 [2024-10-21 12:13:47.543884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.956 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-21 12:13:47.553821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.219 [2024-10-21 12:13:47.553893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.219 [2024-10-21 12:13:47.553910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.219 [2024-10-21 12:13:47.553917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.219 [2024-10-21 12:13:47.553924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.219 [2024-10-21 12:13:47.553940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.219 qpair failed and we were unable to recover it. 
00:29:11.219 [2024-10-21 12:13:47.563881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.219 [2024-10-21 12:13:47.563956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.219 [2024-10-21 12:13:47.563973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.219 [2024-10-21 12:13:47.563980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.219 [2024-10-21 12:13:47.563986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.219 [2024-10-21 12:13:47.564003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-21 12:13:47.573873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.219 [2024-10-21 12:13:47.573940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.219 [2024-10-21 12:13:47.573958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.219 [2024-10-21 12:13:47.573965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.219 [2024-10-21 12:13:47.573972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.219 [2024-10-21 12:13:47.573989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-21 12:13:47.583875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.220 [2024-10-21 12:13:47.583931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.220 [2024-10-21 12:13:47.583950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.220 [2024-10-21 12:13:47.583958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.220 [2024-10-21 12:13:47.583964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.220 [2024-10-21 12:13:47.583982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-21 12:13:47.593934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.220 [2024-10-21 12:13:47.594015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.220 [2024-10-21 12:13:47.594032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.220 [2024-10-21 12:13:47.594039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.220 [2024-10-21 12:13:47.594046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.220 [2024-10-21 12:13:47.594063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-21 12:13:47.604002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.220 [2024-10-21 12:13:47.604081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.220 [2024-10-21 12:13:47.604098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.220 [2024-10-21 12:13:47.604106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.220 [2024-10-21 12:13:47.604112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.220 [2024-10-21 12:13:47.604129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-21 12:13:47.613988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.220 [2024-10-21 12:13:47.614047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.220 [2024-10-21 12:13:47.614066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.220 [2024-10-21 12:13:47.614079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.220 [2024-10-21 12:13:47.614086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.220 [2024-10-21 12:13:47.614103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-21 12:13:47.624057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.220 [2024-10-21 12:13:47.624134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.220 [2024-10-21 12:13:47.624151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.220 [2024-10-21 12:13:47.624159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.220 [2024-10-21 12:13:47.624165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:11.220 [2024-10-21 12:13:47.624182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:11.220 qpair failed and we were unable to recover it.
[The identical seven-line failure sequence repeats 68 more times at roughly 10 ms intervals, from 12:13:47.634057 through 12:13:48.306047, always for tqpair=0x7fdf24000b90 on qpair id 1 with sct 1, sc 130, and each attempt ends "qpair failed and we were unable to recover it."]
00:29:11.752 [2024-10-21 12:13:48.316127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.752 [2024-10-21 12:13:48.316221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.753 [2024-10-21 12:13:48.316238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.753 [2024-10-21 12:13:48.316246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.753 [2024-10-21 12:13:48.316252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.753 [2024-10-21 12:13:48.316269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.753 qpair failed and we were unable to recover it. 00:29:11.753 [2024-10-21 12:13:48.326188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.753 [2024-10-21 12:13:48.326275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.753 [2024-10-21 12:13:48.326294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.753 [2024-10-21 12:13:48.326301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.753 [2024-10-21 12:13:48.326307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.753 [2024-10-21 12:13:48.326335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.753 qpair failed and we were unable to recover it. 00:29:11.753 [2024-10-21 12:13:48.336192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.753 [2024-10-21 12:13:48.336279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.753 [2024-10-21 12:13:48.336296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.753 [2024-10-21 12:13:48.336303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.753 [2024-10-21 12:13:48.336310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:11.753 [2024-10-21 12:13:48.336331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.753 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-10-21 12:13:48.346227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.346286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.346303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.346311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.346317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.346341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.356266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.356339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.356356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.356364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.356371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.356388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.366318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.366394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.366411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.366418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.366425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.366442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-10-21 12:13:48.376327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.376390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.376416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.376424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.376430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.376448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.386361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.386471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.386489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.386496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.386502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.386519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.396401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.396467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.396485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.396493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.396499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.396516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-10-21 12:13:48.406311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.406384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.406401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.406408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.406414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.406430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.416425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.416492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.416512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.416519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.416526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.416549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.426339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.426402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.426420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.426427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.426433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.426450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-10-21 12:13:48.436484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.436548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.436565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.436573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.436579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.436596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.446548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.446645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.446661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.446669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.446675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.446692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-10-21 12:13:48.456551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.456622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.456640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.456647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.456654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.456670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-10-21 12:13:48.466583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.016 [2024-10-21 12:13:48.466643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.016 [2024-10-21 12:13:48.466665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.016 [2024-10-21 12:13:48.466672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.016 [2024-10-21 12:13:48.466678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.016 [2024-10-21 12:13:48.466694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.476631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.476699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.476715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.476723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.476729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.476745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.486572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.486642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.486658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.486665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.486671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.486687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-10-21 12:13:48.496561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.496633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.496654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.496662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.496668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.496687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.506600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.506673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.506690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.506697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.506709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.506726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.516760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.516828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.516845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.516853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.516859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.516876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-10-21 12:13:48.526817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.526882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.526900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.526907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.526914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.526931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.536787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.536849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.536866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.536873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.536880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.536897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.546813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.546873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.546890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.546898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.546904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.546920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-10-21 12:13:48.556869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.556940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.556957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.556964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.556970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.556987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.566941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.567027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.567044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.567051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.567058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.567074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.576903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.576957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.576974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.576981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.576987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.577004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-10-21 12:13:48.586946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.587013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.587030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.587037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.587044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.587060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.596975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.597038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.597054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.597062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.597079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.597097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-10-21 12:13:48.607075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.017 [2024-10-21 12:13:48.607150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.017 [2024-10-21 12:13:48.607167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.017 [2024-10-21 12:13:48.607174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.017 [2024-10-21 12:13:48.607181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.017 [2024-10-21 12:13:48.607197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-10-21 12:13:48.616940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-10-21 12:13:48.617061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-10-21 12:13:48.617078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-10-21 12:13:48.617086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-10-21 12:13:48.617092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.291 [2024-10-21 12:13:48.617109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-10-21 12:13:48.627035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-10-21 12:13:48.627094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-10-21 12:13:48.627115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-10-21 12:13:48.627122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-10-21 12:13:48.627129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.291 [2024-10-21 12:13:48.627146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-10-21 12:13:48.637080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-10-21 12:13:48.637145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-10-21 12:13:48.637163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-10-21 12:13:48.637170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-10-21 12:13:48.637178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.291 [2024-10-21 12:13:48.637195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-10-21 12:13:48.647175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-10-21 12:13:48.647279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-10-21 12:13:48.647297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.647304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.647310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.647330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.657160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.657219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.657235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.657242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.657249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.657265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.667163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.667254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.667270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.667277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.667284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.667299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 
00:29:12.292 [2024-10-21 12:13:48.677218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.677285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.677302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.677309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.677316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.677338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.687252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.687314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.687333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.687345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.687351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.687367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.697306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.697372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.697388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.697395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.697401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.697417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 
00:29:12.292 [2024-10-21 12:13:48.707250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.707303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.707317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.707329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.707335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.707350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.717314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.717379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.717394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.717401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.717407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.717422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.727377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.727450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.727465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.727472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.727478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.727493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 
00:29:12.292 [2024-10-21 12:13:48.737362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.737419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.737436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.737443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.737449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.737464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.747393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.747456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.747470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.747477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.747483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.747498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 00:29:12.292 [2024-10-21 12:13:48.757438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.292 [2024-10-21 12:13:48.757493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.292 [2024-10-21 12:13:48.757507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.292 [2024-10-21 12:13:48.757514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.292 [2024-10-21 12:13:48.757520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:12.292 [2024-10-21 12:13:48.757535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.292 qpair failed and we were unable to recover it. 
00:29:12.292 [2024-10-21 12:13:48.767461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-10-21 12:13:48.767520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-10-21 12:13:48.767534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-10-21 12:13:48.767541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-10-21 12:13:48.767547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:12.292 [2024-10-21 12:13:48.767562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
[the same seven-message failure block repeats 68 more times, from 12:13:48.777 through 12:13:49.449, roughly one attempt every 10 ms; only the timestamps vary — every retry targets tqpair=0x7fdf24000b90, fails CONNECT with sct 1, sc 130, and ends with "qpair failed and we were unable to recover it."]
00:29:13.085 [2024-10-21 12:13:49.459343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.459394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.459408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.459415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.459421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.459436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.469323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.469371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.469384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.469391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.469397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.469411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.479390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.479447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.479460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.479467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.479473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.479487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 
00:29:13.085 [2024-10-21 12:13:49.489357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.489405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.489418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.489425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.489432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.489449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.499423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.499487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.499500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.499507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.499513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.499527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.509324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.509382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.509396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.509403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.509410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.509424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 
00:29:13.085 [2024-10-21 12:13:49.519399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.519460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.519473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.519480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.519487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.519500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.529507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.529558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.529572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.085 [2024-10-21 12:13:49.529579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.085 [2024-10-21 12:13:49.529585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.085 [2024-10-21 12:13:49.529599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.085 qpair failed and we were unable to recover it. 00:29:13.085 [2024-10-21 12:13:49.539577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.085 [2024-10-21 12:13:49.539630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.085 [2024-10-21 12:13:49.539642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.539649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.539655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.539669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 
00:29:13.086 [2024-10-21 12:13:49.549540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.549592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.549605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.549612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.549618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.549632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.559630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.559683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.559696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.559703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.559709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.559723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.569588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.569636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.569650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.569656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.569663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.569676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 
00:29:13.086 [2024-10-21 12:13:49.579665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.579722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.579736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.579743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.579753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.579767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.589606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.589654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.589667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.589674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.589680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.589695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.599733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.599790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.599803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.599809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.599816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.599830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 
00:29:13.086 [2024-10-21 12:13:49.609719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.609794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.609806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.609813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.609820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.609834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.619760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.619809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.619822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.619829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.619835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.619849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.629714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.629764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.629777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.629784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.629790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.629804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 
00:29:13.086 [2024-10-21 12:13:49.639786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.639842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.639855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.639861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.639868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.639882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.649805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.649854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.649867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.649873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.649880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.649893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.086 [2024-10-21 12:13:49.659755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.659805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.659819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.659826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.659832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.659846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 
00:29:13.086 [2024-10-21 12:13:49.669860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.086 [2024-10-21 12:13:49.669914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.086 [2024-10-21 12:13:49.669929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.086 [2024-10-21 12:13:49.669939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.086 [2024-10-21 12:13:49.669945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.086 [2024-10-21 12:13:49.669960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.086 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.679926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.679980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.679993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.680000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.680006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.680020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.689932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.690027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.690041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.690048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.690054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.690068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 
00:29:13.350 [2024-10-21 12:13:49.699852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.699902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.699916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.699923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.699930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.699944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.709957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.710020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.710034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.710040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.710047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.710061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.720039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.720129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.720142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.720149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.720155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.720169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 
00:29:13.350 [2024-10-21 12:13:49.730033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.730080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.730094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.730101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.730107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.730121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.740083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.740131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.740145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.740152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.740158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.740172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.750035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.750086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.750099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.750106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.750112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.750126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 
00:29:13.350 [2024-10-21 12:13:49.760158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.760214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.760227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.760238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.760244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.760258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.770150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.770218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.770231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.770238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.770244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.770258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.780191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.780239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.780252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.780259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.780265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.780279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 
00:29:13.350 [2024-10-21 12:13:49.790176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.790223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.790236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.790243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.790249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.790263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.800230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.800282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.800295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.800302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.350 [2024-10-21 12:13:49.800308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.350 [2024-10-21 12:13:49.800325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.350 qpair failed and we were unable to recover it. 00:29:13.350 [2024-10-21 12:13:49.810263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.350 [2024-10-21 12:13:49.810354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.350 [2024-10-21 12:13:49.810367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.350 [2024-10-21 12:13:49.810374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.810381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.810395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 
00:29:13.351 [2024-10-21 12:13:49.820348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.820419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.820432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.820440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.820446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.820461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.830284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.830344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.830357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.830364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.830370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.830385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.840256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.840352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.840366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.840373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.840379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.840398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 
00:29:13.351 [2024-10-21 12:13:49.850314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.850369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.850386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.850393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.850399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.850413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.860403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.860457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.860470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.860477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.860484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.860497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.870361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.870404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.870418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.870425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.870431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.870446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 
00:29:13.351 [2024-10-21 12:13:49.880465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.880518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.880532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.880539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.880545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.880560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.890520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.890581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.890596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.890603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.890609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.890631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.900514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.900585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.900599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.900606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.900612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.900627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 
00:29:13.351 [2024-10-21 12:13:49.910506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.910557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.910570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.910577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.910583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.910597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.920580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.920633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.920646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.920652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.920659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.920673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 00:29:13.351 [2024-10-21 12:13:49.930578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.351 [2024-10-21 12:13:49.930627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.351 [2024-10-21 12:13:49.930641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.351 [2024-10-21 12:13:49.930648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.351 [2024-10-21 12:13:49.930654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:13.351 [2024-10-21 12:13:49.930668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.351 qpair failed and we were unable to recover it. 
00:29:14.142 [2024-10-21 12:13:50.572297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.572359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.572374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.572381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.572387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.572401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 00:29:14.142 [2024-10-21 12:13:50.582367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.582437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.582450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.582457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.582463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.582477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 00:29:14.142 [2024-10-21 12:13:50.592386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.592472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.592485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.592492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.592498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.592512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 
00:29:14.142 [2024-10-21 12:13:50.602389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.602435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.602448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.602455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.602461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.602475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 00:29:14.142 [2024-10-21 12:13:50.612419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.612472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.612486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.612493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.612499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.612513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 00:29:14.142 [2024-10-21 12:13:50.622449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.622498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.622511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.622518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.142 [2024-10-21 12:13:50.622524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.142 [2024-10-21 12:13:50.622538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.142 qpair failed and we were unable to recover it. 
00:29:14.142 [2024-10-21 12:13:50.632452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.142 [2024-10-21 12:13:50.632500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.142 [2024-10-21 12:13:50.632513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.142 [2024-10-21 12:13:50.632520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.632526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.632540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.642489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.642540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.642552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.642559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.642565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.642579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.652387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.652435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.652448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.652455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.652464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.652479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 
00:29:14.143 [2024-10-21 12:13:50.662588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.662634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.662648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.662654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.662660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.662674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.672571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.672616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.672629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.672636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.672642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.672655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.682520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.682567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.682580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.682587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.682593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.682607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 
00:29:14.143 [2024-10-21 12:13:50.692638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.692685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.692698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.692705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.692711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.692724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.702680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.702723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.702738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.702745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.702751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.702765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.712680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.712725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.712738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.712745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.712751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.712765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 
00:29:14.143 [2024-10-21 12:13:50.722667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.722713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.722726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.722733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.722739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.722753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.143 [2024-10-21 12:13:50.732741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.143 [2024-10-21 12:13:50.732794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.143 [2024-10-21 12:13:50.732807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.143 [2024-10-21 12:13:50.732814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.143 [2024-10-21 12:13:50.732820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.143 [2024-10-21 12:13:50.732833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.143 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.742805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.742901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.742915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.742925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.742931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.742946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 
00:29:14.406 [2024-10-21 12:13:50.752762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.752806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.752819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.752826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.752832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.752846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.762817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.762865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.762878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.762885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.762891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.762905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.772816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.772859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.772873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.772880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.772886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.772900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 
00:29:14.406 [2024-10-21 12:13:50.782848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.782892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.782905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.782911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.782918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.782932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.792860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.792905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.792918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.792925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.792931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.792945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.802926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.802969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.802982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.802989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.802995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.803009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 
00:29:14.406 [2024-10-21 12:13:50.812947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.813034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.813047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.813054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.813060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.813074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.822998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.823048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.823061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.823067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.823074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.823087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.832951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.832999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.833024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.833036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.833043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.833062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 
00:29:14.406 [2024-10-21 12:13:50.843017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.406 [2024-10-21 12:13:50.843090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.406 [2024-10-21 12:13:50.843104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.406 [2024-10-21 12:13:50.843112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.406 [2024-10-21 12:13:50.843118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.406 [2024-10-21 12:13:50.843133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.406 qpair failed and we were unable to recover it. 00:29:14.406 [2024-10-21 12:13:50.853066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.853118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.853143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.853152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.853159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.853177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.863094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.863184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.863200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.863207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.863214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.863232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 
00:29:14.407 [2024-10-21 12:13:50.873117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.873168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.873182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.873189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.873195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.873210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.883155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.883213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.883227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.883234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.883240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.883254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.893128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.893179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.893192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.893199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.893206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.893220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 
00:29:14.407 [2024-10-21 12:13:50.903191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.903232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.903245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.903252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.903258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.903272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.913207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.913261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.913275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.913282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.913288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.913302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.923252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.923298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.923314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.923324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.923331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.923346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 
00:29:14.407 [2024-10-21 12:13:50.933180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.933234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.933248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.933255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.933262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.933276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.943198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.943241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.943255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.943262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.943269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.943283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.953339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.953384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.953398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.953405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.953412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.953426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 
00:29:14.407 [2024-10-21 12:13:50.963350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.963395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.963408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.963415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.407 [2024-10-21 12:13:50.963421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.407 [2024-10-21 12:13:50.963439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.407 qpair failed and we were unable to recover it. 00:29:14.407 [2024-10-21 12:13:50.973387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.407 [2024-10-21 12:13:50.973445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.407 [2024-10-21 12:13:50.973458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.407 [2024-10-21 12:13:50.973464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.408 [2024-10-21 12:13:50.973471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.408 [2024-10-21 12:13:50.973484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.408 qpair failed and we were unable to recover it. 00:29:14.408 [2024-10-21 12:13:50.983415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.408 [2024-10-21 12:13:50.983463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.408 [2024-10-21 12:13:50.983476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.408 [2024-10-21 12:13:50.983483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.408 [2024-10-21 12:13:50.983489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.408 [2024-10-21 12:13:50.983503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.408 qpair failed and we were unable to recover it. 
00:29:14.408 [2024-10-21 12:13:50.993427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.408 [2024-10-21 12:13:50.993473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.408 [2024-10-21 12:13:50.993487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.408 [2024-10-21 12:13:50.993495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.408 [2024-10-21 12:13:50.993501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.408 [2024-10-21 12:13:50.993515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.408 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.003474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.003519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.003532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.003539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.003546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.003560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.013484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.013531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.013549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.013556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.013562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.013576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-10-21 12:13:51.023501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.023547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.023560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.023567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.023573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.023587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.033577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.033631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.033645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.033651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.033658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.033672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.043599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.043645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.043659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.043667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.043673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.043689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-10-21 12:13:51.053633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.053683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.053696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.053703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.053713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.053727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.063639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.063683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.063697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.063704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.063710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.063724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-10-21 12:13:51.073652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.670 [2024-10-21 12:13:51.073693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.670 [2024-10-21 12:13:51.073707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.670 [2024-10-21 12:13:51.073714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.670 [2024-10-21 12:13:51.073720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.670 [2024-10-21 12:13:51.073735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-10-21 12:13:51.083652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.671 [2024-10-21 12:13:51.083699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.671 [2024-10-21 12:13:51.083712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.671 [2024-10-21 12:13:51.083719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.671 [2024-10-21 12:13:51.083725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.671 [2024-10-21 12:13:51.083739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-10-21 12:13:51.093722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.671 [2024-10-21 12:13:51.093772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.671 [2024-10-21 12:13:51.093785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.671 [2024-10-21 12:13:51.093792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.671 [2024-10-21 12:13:51.093798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.671 [2024-10-21 12:13:51.093812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-10-21 12:13:51.103705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.671 [2024-10-21 12:13:51.103789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.671 [2024-10-21 12:13:51.103803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.671 [2024-10-21 12:13:51.103810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.671 [2024-10-21 12:13:51.103817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:14.671 [2024-10-21 12:13:51.103831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.671 qpair failed and we were unable to recover it. 
00:29:14.671 [2024-10-21 12:13:51.113779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.113849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.113862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.113869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.113875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.113889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.123759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.123805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.123819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.123826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.123832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.123847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.133832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.133878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.133892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.133899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.133905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.133919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.143800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.143846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.143859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.143866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.143876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.143890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.153849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.153893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.153906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.153913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.153919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.153933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.163864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.163924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.163937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.163943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.163949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.163963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.173806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.173852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.173866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.173873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.173879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.173893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.184009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.184077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.184091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.184099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.184105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.184119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.193985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.671 [2024-10-21 12:13:51.194036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.671 [2024-10-21 12:13:51.194049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.671 [2024-10-21 12:13:51.194056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.671 [2024-10-21 12:13:51.194062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.671 [2024-10-21 12:13:51.194076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-10-21 12:13:51.204001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.204065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.204078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.204085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.204091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.204105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-10-21 12:13:51.214015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.214061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.214074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.214081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.214087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.214101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-10-21 12:13:51.224079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.224123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.224136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.224143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.224149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.224163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-10-21 12:13:51.234042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.234089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.234103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.234113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.234119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.234133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-10-21 12:13:51.244107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.244158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.244171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.244178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.244184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.244198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-10-21 12:13:51.254148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.672 [2024-10-21 12:13:51.254201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.672 [2024-10-21 12:13:51.254214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.672 [2024-10-21 12:13:51.254221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.672 [2024-10-21 12:13:51.254227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.672 [2024-10-21 12:13:51.254240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.264030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.264080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.264093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.264100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.264106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.264120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.274265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.274314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.274331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.274338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.274344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.274358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.284127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.284173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.284187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.284193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.284200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.284214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.294227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.294277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.294290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.294296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.294303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.294316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.304268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.304309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.304326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.304333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.304339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.304353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.314362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.314432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.314445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.314452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.314458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.314472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.324332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.324377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.324390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.324401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.324407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.324422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.334358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.934 [2024-10-21 12:13:51.334405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.934 [2024-10-21 12:13:51.334420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.934 [2024-10-21 12:13:51.334427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.934 [2024-10-21 12:13:51.334433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.934 [2024-10-21 12:13:51.334447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.934 qpair failed and we were unable to recover it.
00:29:14.934 [2024-10-21 12:13:51.344380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.344422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.344435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.344442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.344448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.344462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.354393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.354435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.354449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.354455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.354461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.354475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.364416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.364480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.364493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.364500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.364506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.364520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.374430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.374479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.374493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.374500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.374506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.374520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.384466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.384513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.384525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.384532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.384539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.384553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.394652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.394736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.394749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.394756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.394762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.394776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.404521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.404610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.404624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.404631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.404637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.404651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.414556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.414605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.414622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.414629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.414636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.414650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.424623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.424669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.424682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.424689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.424696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.424710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.434597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.434638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.434651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.434658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.434665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.434679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.444616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.444660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.444673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.444680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.444686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.444700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.454700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.454785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.454798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.454805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.454812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.935 [2024-10-21 12:13:51.454833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.935 qpair failed and we were unable to recover it.
00:29:14.935 [2024-10-21 12:13:51.464682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.935 [2024-10-21 12:13:51.464734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.935 [2024-10-21 12:13:51.464747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.935 [2024-10-21 12:13:51.464754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.935 [2024-10-21 12:13:51.464761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.464776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.474757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.474798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.474812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.474819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.474825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.474839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.484772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.484817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.484830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.484837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.484843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.484858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.494801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.494850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.494863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.494869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.494876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.494889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.504811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.504864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.504880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.504887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.504893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.504908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.514850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.514904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.514917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.514924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.514931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.514944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:14.936 [2024-10-21 12:13:51.524860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.936 [2024-10-21 12:13:51.524911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.936 [2024-10-21 12:13:51.524924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.936 [2024-10-21 12:13:51.524931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.936 [2024-10-21 12:13:51.524937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:14.936 [2024-10-21 12:13:51.524952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:14.936 qpair failed and we were unable to recover it.
00:29:15.197 [2024-10-21 12:13:51.534900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.197 [2024-10-21 12:13:51.534951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.197 [2024-10-21 12:13:51.534965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.197 [2024-10-21 12:13:51.534972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.197 [2024-10-21 12:13:51.534978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.197 [2024-10-21 12:13:51.534993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.197 qpair failed and we were unable to recover it.
00:29:15.197 [2024-10-21 12:13:51.544933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.197 [2024-10-21 12:13:51.544981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.197 [2024-10-21 12:13:51.544995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.197 [2024-10-21 12:13:51.545002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.197 [2024-10-21 12:13:51.545008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.197 [2024-10-21 12:13:51.545027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.197 qpair failed and we were unable to recover it.
00:29:15.197 [2024-10-21 12:13:51.554956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.197 [2024-10-21 12:13:51.555000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.197 [2024-10-21 12:13:51.555013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.197 [2024-10-21 12:13:51.555020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.197 [2024-10-21 12:13:51.555026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.197 [2024-10-21 12:13:51.555040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.197 qpair failed and we were unable to recover it.
00:29:15.197 [2024-10-21 12:13:51.564984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.565044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.565057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.565064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.565070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.565084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.575027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.575116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.575130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.575136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.575142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.575157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.585023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.585097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.585121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.585130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.585137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.585155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.595104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.595164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.595183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.595190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.595197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.595212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.605090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.605161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.605175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.605182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.605188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.605203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.615083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.615139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.615153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.615160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.615166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.615180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.625122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.625164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.625177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.625184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.625190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.625204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.635142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.635186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.635200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.635207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.635217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.635232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.645202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.645257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.645270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.645277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.645283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.645297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.655102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.655152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.655165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.655172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.655178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.655192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.665249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.665293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.665306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.665313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.665323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.665338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.675266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.198 [2024-10-21 12:13:51.675312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.198 [2024-10-21 12:13:51.675330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.198 [2024-10-21 12:13:51.675337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.198 [2024-10-21 12:13:51.675343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90
00:29:15.198 [2024-10-21 12:13:51.675358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:15.198 qpair failed and we were unable to recover it.
00:29:15.198 [2024-10-21 12:13:51.685214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.198 [2024-10-21 12:13:51.685270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.198 [2024-10-21 12:13:51.685284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.198 [2024-10-21 12:13:51.685291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.198 [2024-10-21 12:13:51.685297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.198 [2024-10-21 12:13:51.685316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.198 qpair failed and we were unable to recover it. 00:29:15.198 [2024-10-21 12:13:51.695342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.198 [2024-10-21 12:13:51.695388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.198 [2024-10-21 12:13:51.695402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.198 [2024-10-21 12:13:51.695409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.198 [2024-10-21 12:13:51.695416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.695430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.705358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.705403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.705416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.705423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.705429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.705444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 
00:29:15.199 [2024-10-21 12:13:51.715368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.715411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.715425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.715432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.715438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.715452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.725399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.725489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.725502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.725509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.725518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.725533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.735474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.735554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.735568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.735575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.735581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.735595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 
00:29:15.199 [2024-10-21 12:13:51.745423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.745467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.745481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.745488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.745494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.745509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.755476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.755520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.755533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.755540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.755546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.755560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.765497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.765543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.765556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.765563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.765569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.765583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 
00:29:15.199 [2024-10-21 12:13:51.775555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.775626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.775640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.775647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.775653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.775667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.199 [2024-10-21 12:13:51.785558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.199 [2024-10-21 12:13:51.785597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.199 [2024-10-21 12:13:51.785610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.199 [2024-10-21 12:13:51.785617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.199 [2024-10-21 12:13:51.785623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.199 [2024-10-21 12:13:51.785637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.199 qpair failed and we were unable to recover it. 00:29:15.462 [2024-10-21 12:13:51.795565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.462 [2024-10-21 12:13:51.795609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.462 [2024-10-21 12:13:51.795622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.462 [2024-10-21 12:13:51.795629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.462 [2024-10-21 12:13:51.795635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.462 [2024-10-21 12:13:51.795649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.462 qpair failed and we were unable to recover it. 
00:29:15.462 [2024-10-21 12:13:51.805593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.462 [2024-10-21 12:13:51.805640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.462 [2024-10-21 12:13:51.805653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.462 [2024-10-21 12:13:51.805660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.462 [2024-10-21 12:13:51.805666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.462 [2024-10-21 12:13:51.805680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.462 qpair failed and we were unable to recover it. 00:29:15.462 [2024-10-21 12:13:51.815656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.462 [2024-10-21 12:13:51.815699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.462 [2024-10-21 12:13:51.815713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.462 [2024-10-21 12:13:51.815723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.462 [2024-10-21 12:13:51.815729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.462 [2024-10-21 12:13:51.815743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.462 qpair failed and we were unable to recover it. 00:29:15.462 [2024-10-21 12:13:51.825661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.462 [2024-10-21 12:13:51.825702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.462 [2024-10-21 12:13:51.825715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.462 [2024-10-21 12:13:51.825722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.462 [2024-10-21 12:13:51.825728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.462 [2024-10-21 12:13:51.825742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.462 qpair failed and we were unable to recover it. 
00:29:15.463 [2024-10-21 12:13:51.835657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.835697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.835710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.835717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.835723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.835737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.845713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.845761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.845775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.845781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.845788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.845801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.855762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.855811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.855824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.855831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.855838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.855852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 
00:29:15.463 [2024-10-21 12:13:51.865720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.865785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.865798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.865805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.865811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.865825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.875782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.875835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.875848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.875855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.875861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.875875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.885837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.885883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.885896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.885903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.885909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.885923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 
00:29:15.463 [2024-10-21 12:13:51.895739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.895794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.895809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.895816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.895822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.895836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.905877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.905921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.905938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.905945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.905951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.905966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.915895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.915938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.915951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.915958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.915964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.915977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 
00:29:15.463 [2024-10-21 12:13:51.925919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.925966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.925979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.925986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.925993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.926007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.935957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.936004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.936018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.936025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.936031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.936045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 00:29:15.463 [2024-10-21 12:13:51.945970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.463 [2024-10-21 12:13:51.946010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.463 [2024-10-21 12:13:51.946023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.463 [2024-10-21 12:13:51.946030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.463 [2024-10-21 12:13:51.946036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.463 [2024-10-21 12:13:51.946053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.463 qpair failed and we were unable to recover it. 
00:29:15.464 [2024-10-21 12:13:51.955891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:51.955943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:51.955957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:51.955964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:51.955970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:51.955984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 00:29:15.464 [2024-10-21 12:13:51.966030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:51.966073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:51.966086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:51.966092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:51.966099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:51.966113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 00:29:15.464 [2024-10-21 12:13:51.976070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:51.976119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:51.976144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:51.976152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:51.976160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:51.976178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 
00:29:15.464 [2024-10-21 12:13:51.986079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:51.986152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:51.986167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:51.986175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:51.986181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:51.986196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 00:29:15.464 [2024-10-21 12:13:51.996113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:51.996174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:51.996192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:51.996199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:51.996206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:51.996220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 00:29:15.464 [2024-10-21 12:13:52.006146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:52.006190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:52.006204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:52.006211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:52.006217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf24000b90 00:29:15.464 [2024-10-21 12:13:52.006231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.464 qpair failed and we were unable to recover it. 
00:29:15.464 [2024-10-21 12:13:52.016227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:52.016355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:52.016420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:52.016445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:52.016466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf18000b90 00:29:15.464 [2024-10-21 12:13:52.016518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.464 qpair failed and we were unable to recover it. 00:29:15.464 [2024-10-21 12:13:52.026198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:52.026262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:52.026294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:52.026310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:52.026331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf18000b90 00:29:15.464 [2024-10-21 12:13:52.026363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.464 qpair failed and we were unable to recover it. 
00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Write completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 Read completed with error (sct=0, sc=8) 00:29:15.464 starting I/O failed 00:29:15.464 [2024-10-21 12:13:52.027219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:15.464 [2024-10-21 12:13:52.036232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.464 [2024-10-21 12:13:52.036342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.464 [2024-10-21 12:13:52.036408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.464 [2024-10-21 12:13:52.036434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:29:15.464 [2024-10-21 12:13:52.036454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1eba770 00:29:15.464 [2024-10-21 12:13:52.036508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:15.465 qpair failed and we were unable to recover it. 00:29:15.465 [2024-10-21 12:13:52.046242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.465 [2024-10-21 12:13:52.046330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.465 [2024-10-21 12:13:52.046361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.465 [2024-10-21 12:13:52.046376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.465 [2024-10-21 12:13:52.046389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1eba770 00:29:15.465 [2024-10-21 12:13:52.046417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:15.465 qpair failed and we were unable to recover it. 00:29:15.465 [2024-10-21 12:13:52.046863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1480 is same with the state(6) to be set 00:29:15.726 [2024-10-21 12:13:52.056304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.726 [2024-10-21 12:13:52.056404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.726 [2024-10-21 12:13:52.056468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.726 [2024-10-21 12:13:52.056492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.726 [2024-10-21 12:13:52.056509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf1c000b90 00:29:15.726 [2024-10-21 12:13:52.056562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.726 qpair failed and we were unable to recover it. 
00:29:15.726 [2024-10-21 12:13:52.066330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.726 [2024-10-21 12:13:52.066405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.726 [2024-10-21 12:13:52.066438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.726 [2024-10-21 12:13:52.066455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.726 [2024-10-21 12:13:52.066471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf1c000b90 00:29:15.726 [2024-10-21 12:13:52.066505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.726 qpair failed and we were unable to recover it. 00:29:15.726 [2024-10-21 12:13:52.067008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb1480 (9): Bad file descriptor 00:29:15.726 Initializing NVMe Controllers 00:29:15.726 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:15.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:15.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:15.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:15.726 Initialization complete. Launching workers. 
00:29:15.726 Starting thread on core 1 00:29:15.726 Starting thread on core 2 00:29:15.726 Starting thread on core 3 00:29:15.726 Starting thread on core 0 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:15.726 00:29:15.726 real 0m11.299s 00:29:15.726 user 0m22.086s 00:29:15.726 sys 0m3.972s 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.726 ************************************ 00:29:15.726 END TEST nvmf_target_disconnect_tc2 00:29:15.726 ************************************ 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.726 rmmod nvme_tcp 00:29:15.726 rmmod nvme_fabrics 00:29:15.726 rmmod nvme_keyring 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1161737 ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1161737 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1161737 ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1161737 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1161737 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1161737' 00:29:15.726 killing process with pid 1161737 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1161737 00:29:15.726 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1161737 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.987 12:13:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.919 00:29:17.919 real 0m21.661s 00:29:17.919 user 0m49.440s 00:29:17.919 sys 0m10.105s 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:17.919 ************************************ 00:29:17.919 END TEST nvmf_target_disconnect 00:29:17.919 ************************************ 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:17.919 00:29:17.919 real 6m30.434s 00:29:17.919 user 11m25.739s 00:29:17.919 sys 2m15.729s 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.919 12:13:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.919 ************************************ 00:29:17.919 END TEST nvmf_host 00:29:17.919 ************************************ 00:29:18.180 12:13:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:18.180 12:13:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:18.180 12:13:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:18.180 12:13:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:18.180 12:13:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.180 12:13:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.180 ************************************ 00:29:18.180 START TEST nvmf_target_core_interrupt_mode 00:29:18.180 ************************************ 00:29:18.180 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:18.180 * Looking for test storage... 00:29:18.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:18.180 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.180 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.180 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.442 --rc genhtml_branch_coverage=1 00:29:18.442 --rc genhtml_function_coverage=1 00:29:18.442 --rc genhtml_legend=1 00:29:18.442 --rc geninfo_all_blocks=1 00:29:18.442 --rc geninfo_unexecuted_blocks=1 00:29:18.442 00:29:18.442 ' 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.442 --rc genhtml_branch_coverage=1 00:29:18.442 --rc genhtml_function_coverage=1 00:29:18.442 --rc genhtml_legend=1 00:29:18.442 --rc geninfo_all_blocks=1 00:29:18.442 --rc geninfo_unexecuted_blocks=1 00:29:18.442 00:29:18.442 ' 00:29:18.442 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.442 --rc genhtml_branch_coverage=1 00:29:18.442 --rc genhtml_function_coverage=1 00:29:18.443 --rc genhtml_legend=1 00:29:18.443 --rc geninfo_all_blocks=1 00:29:18.443 --rc geninfo_unexecuted_blocks=1 00:29:18.443 00:29:18.443 ' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.443 --rc genhtml_branch_coverage=1 00:29:18.443 --rc genhtml_function_coverage=1 00:29:18.443 --rc genhtml_legend=1 00:29:18.443 --rc geninfo_all_blocks=1 00:29:18.443 --rc geninfo_unexecuted_blocks=1 00:29:18.443 00:29:18.443 ' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.443 ************************************ 00:29:18.443 START TEST nvmf_abort 00:29:18.443 ************************************ 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.443 * Looking for test storage... 00:29:18.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.443 12:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.705 --rc genhtml_branch_coverage=1 00:29:18.705 --rc genhtml_function_coverage=1 00:29:18.705 --rc genhtml_legend=1 00:29:18.705 --rc geninfo_all_blocks=1 00:29:18.705 --rc geninfo_unexecuted_blocks=1 00:29:18.705 00:29:18.705 ' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.705 --rc genhtml_branch_coverage=1 00:29:18.705 --rc genhtml_function_coverage=1 00:29:18.705 --rc genhtml_legend=1 00:29:18.705 --rc geninfo_all_blocks=1 00:29:18.705 --rc geninfo_unexecuted_blocks=1 00:29:18.705 00:29:18.705 ' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.705 --rc genhtml_branch_coverage=1 00:29:18.705 --rc genhtml_function_coverage=1 00:29:18.705 --rc genhtml_legend=1 00:29:18.705 --rc geninfo_all_blocks=1 00:29:18.705 --rc geninfo_unexecuted_blocks=1 00:29:18.705 00:29:18.705 ' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.705 --rc genhtml_branch_coverage=1 00:29:18.705 --rc genhtml_function_coverage=1 00:29:18.705 --rc genhtml_legend=1 00:29:18.705 --rc geninfo_all_blocks=1 00:29:18.705 --rc geninfo_unexecuted_blocks=1 00:29:18.705 00:29:18.705 ' 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.705 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.706 12:13:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.706 12:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.851 12:14:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.851 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:26.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
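The nvmf/common.sh@320-@346 traces above are gather_supported_nvmf_pci_devs bucketing NICs by PCI vendor:device ID: Intel (0x8086) IDs 0x1592/0x159b land in e810, 0x37d2 in x722, and the Mellanox (0x15b3) IDs in mlx; because SPDK_TEST_NVMF_NICS=e810, only the e810 bucket survives into pci_devs, which is why exactly the two 0x159b ports are reported. A minimal sketch of the same classification, assuming a pci_bus_cache map keyed "0xVENDOR:0xDEVICE" -> BDF list as in the trace (the lspci parsing here is illustrative, not the script's actual code):

#!/usr/bin/env bash
# Rebuild a "0xVENDOR:0xDEVICE -> BDFs" cache from lspci -nD, then collect
# the two E810 device IDs the trace checks for (0x1592 and 0x159b).
declare -A pci_bus_cache
while read -r bdf _class id _; do
  pci_bus_cache["0x${id%%:*}:0x${id##*:}"]+="$bdf "
done < <(lspci -nD)

intel=0x8086
e810=()
for dev_id in 0x1592 0x159b; do
  e810+=(${pci_bus_cache["$intel:$dev_id"]})   # unquoted on purpose: split BDFs
done
for bdf in "${e810[@]}"; do
  echo "Found E810 NIC at $bdf"
done

Each surviving BDF is then resolved to its kernel net device through /sys/bus/pci/devices/$pci/net/* (common.sh@409 below), which is where cvl_0_0 and cvl_0_1 come from.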
00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:26.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:26.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:26.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:29:26.852 00:29:26.852 --- 10.0.0.2 ping statistics --- 00:29:26.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.852 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:29:26.852 00:29:26.852 --- 10.0.0.1 ping statistics --- 00:29:26.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.852 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1167308 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1167308 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:26.852 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1167308 ']' 00:29:26.853 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.853 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.853 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.853 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.853 12:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:26.853 [2024-10-21 12:14:02.656941] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:26.853 [2024-10-21 12:14:02.658065] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:29:26.853 [2024-10-21 12:14:02.658117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.853 [2024-10-21 12:14:02.746014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:26.853 [2024-10-21 12:14:02.798328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.853 [2024-10-21 12:14:02.798376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.853 [2024-10-21 12:14:02.798384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.853 [2024-10-21 12:14:02.798391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.853 [2024-10-21 12:14:02.798397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.853 [2024-10-21 12:14:02.800407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.853 [2024-10-21 12:14:02.800595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.853 [2024-10-21 12:14:02.800707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.853 [2024-10-21 12:14:02.876586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:26.853 [2024-10-21 12:14:02.877644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:26.853 [2024-10-21 12:14:02.877966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
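Two details of the nvmf_tgt launch traced at common.sh@506 are worth decoding: -m 0xE is binary 1110, so reactors come up on cores 1, 2 and 3 (matching the three "Reactor started on core" notices, and leaving core 0 free for the initiator-side abort tool), and --interrupt-mode is what triggers the spdk_thread_set_interrupt_mode notices that follow. A quick sanity check of the mask, plus one way to inspect the live reactors (framework_get_reactors is a standard SPDK RPC; the socket path matches the waitforlisten trace above):

# 0xE == 1110b -> expect reactors on cores 1..3 only
for core in 0 1 2 3; do
  (( (0xE >> core) & 1 )) && echo "reactor expected on core $core"
done

# Query the running target over its RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  -s /var/tmp/spdk.sock framework_get_reactors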
00:29:26.853 [2024-10-21 12:14:02.878119] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 [2024-10-21 12:14:03.521687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 Malloc0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 Delay0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
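Condensed, the abort.sh bring-up traced here is a short sequence of RPCs (rpc_cmd in the trace is a thin wrapper over rpc.py); the flags are copied verbatim from the trace, the listener call matches target/abort.sh@26 just below, and the rpc helper function is a hypothetical convenience, not part of the scripts. The four 1000000 values give Delay0 roughly one second of average and p99 latency, in microseconds, for both reads and writes, which is what keeps I/O queued long enough for the abort requests to find it:

# Hypothetical helper: talk to the target's RPC socket directly
rpc() {
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock "$@"
}

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB bdev, 4096 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # latencies in microseconds
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420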
00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 [2024-10-21 12:14:03.625621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.113 12:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:27.374 [2024-10-21 12:14:03.755501] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:29.292 Initializing NVMe Controllers 00:29:29.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:29.292 controller IO queue size 128 less than required 00:29:29.292 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:29.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:29.292 Initialization complete. Launching workers. 
00:29:29.292 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28576 00:29:29.292 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28633, failed to submit 66 00:29:29.292 success 28576, unsuccessful 57, failed 0 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.292 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.292 rmmod nvme_tcp 00:29:29.292 rmmod nvme_fabrics 00:29:29.553 rmmod nvme_keyring 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1167308 ']' 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1167308 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1167308 ']' 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1167308 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1167308 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1167308' 00:29:29.553 killing process with pid 1167308 
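The abort run's counters above are internally consistent and worth reading together: of the 28633 + 66 = 28699 abort attempts, 66 could not be submitted at all, 28576 succeeded, and 57 were unsuccessful (the target had presumably already completed those commands), which lines up with each successful abort failing exactly one queued I/O on the NS line (123 completed + 28576 aborted). A quick check using only the log's own numbers:

# Consistency check on the counters printed by the abort example above
submitted=28633; success=28576; unsuccessful=57; no_submit=66
(( submitted == success + unsuccessful )) \
  && echo "every submitted abort accounted for"
echo "I/Os failed by successful aborts: $success"    # matches 'failed: 28576'
echo "total abort attempts: $((submitted + no_submit))"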
00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1167308 00:29:29.553 12:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1167308 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.814 12:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.727 00:29:31.727 real 0m13.372s 00:29:31.727 user 0m10.885s 00:29:31.727 sys 0m6.960s 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.727 ************************************ 00:29:31.727 END TEST nvmf_abort 00:29:31.727 ************************************ 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:31.727 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:31.989 ************************************ 00:29:31.989 START TEST nvmf_ns_hotplug_stress 00:29:31.989 ************************************ 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:31.989 * Looking for test storage... 
00:29:31.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.989 --rc genhtml_branch_coverage=1 00:29:31.989 --rc genhtml_function_coverage=1 00:29:31.989 --rc genhtml_legend=1 00:29:31.989 --rc geninfo_all_blocks=1 00:29:31.989 --rc geninfo_unexecuted_blocks=1 00:29:31.989 00:29:31.989 ' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.989 --rc genhtml_branch_coverage=1 00:29:31.989 --rc genhtml_function_coverage=1 00:29:31.989 --rc genhtml_legend=1 00:29:31.989 --rc geninfo_all_blocks=1 00:29:31.989 --rc geninfo_unexecuted_blocks=1 00:29:31.989 00:29:31.989 ' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.989 --rc genhtml_branch_coverage=1 00:29:31.989 --rc genhtml_function_coverage=1 00:29:31.989 --rc genhtml_legend=1 00:29:31.989 --rc geninfo_all_blocks=1 00:29:31.989 --rc geninfo_unexecuted_blocks=1 00:29:31.989 00:29:31.989 ' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:31.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.989 --rc genhtml_branch_coverage=1 00:29:31.989 --rc genhtml_function_coverage=1 
00:29:31.989 --rc genhtml_legend=1 00:29:31.989 --rc geninfo_all_blocks=1 00:29:31.989 --rc geninfo_unexecuted_blocks=1 00:29:31.989 00:29:31.989 ' 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.989 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
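[editor's note] The trace above (common/autotest_common.sh@1690-1705 driving scripts/common.sh@333-368) is the lcov version gate: `lt 1.15 2` splits both version strings on `.-:`, walks them component-wise, and because 1 < 2 it returns 0, so the branch- and function-coverage flags get baked into LCOV_OPTS and LCOV. A condensed sketch of that comparison follows; `version_lt` is an illustrative name, and this is not the verbatim scripts/common.sh source (the real script also digit-validates each component via its `decimal` helper):

    # Sketch: succeed (return 0) when version $1 sorts before version $2,
    # mirroring the component-wise walk in the trace (lt 1.15 2 -> true).
    version_lt() {
        local IFS='.-:'            # same separators the trace reads with
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1                   # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi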
00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.990 12:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.136 12:14:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.136 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.137 12:14:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:40.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:40.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:40.137 
12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:40.137 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:40.137 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.137 12:14:15 
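[editor's note] nvmf/common.sh@313-430 above is gather_supported_nvmf_pci_devs: it seeds the e810/x722/mlx arrays from known vendor:device IDs, finds both ports of one E810 NIC here (8086:0x159b, bound to the ice driver), and maps each PCI address to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. A minimal standalone sketch of that PCI-to-netdev mapping, assuming the same sysfs layout (not the common.sh code itself):

    # Sketch: list the net devices behind each NVMf-capable PCI function,
    # the way the trace resolves 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)                   # taken from the log above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basename
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done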
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.137 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:29:40.137 00:29:40.137 --- 10.0.0.2 ping statistics --- 00:29:40.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.138 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:29:40.138 00:29:40.138 --- 10.0.0.1 ping statistics --- 00:29:40.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.138 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:40.138 12:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1172085 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1172085 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1172085 ']' 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
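[editor's note] The nvmf_tcp_init trace above (nvmf/common.sh@250-291) builds the phy-mode topology: the first port is moved into a private network namespace to act as the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens TCP 4420 on the initiator side, and a one-packet ping in each direction proves the link; common.sh@293 then prefixes NVMF_APP with `ip netns exec`, which is why nvmf_tgt is launched under cvl_0_0_ns_spdk below. A hedged restatement of that setup, with interface and namespace names copied from the log and error handling omitted:

    # Sketch: target NIC in a netns, initiator NIC in the root namespace.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator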
00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.138 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.138 [2024-10-21 12:14:16.090986] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.138 [2024-10-21 12:14:16.092124] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:29:40.138 [2024-10-21 12:14:16.092177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.138 [2024-10-21 12:14:16.182047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.138 [2024-10-21 12:14:16.233475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.138 [2024-10-21 12:14:16.233523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.138 [2024-10-21 12:14:16.233537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.138 [2024-10-21 12:14:16.233545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.138 [2024-10-21 12:14:16.233550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.138 [2024-10-21 12:14:16.235356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.138 [2024-10-21 12:14:16.235584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.138 [2024-10-21 12:14:16.235587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.138 [2024-10-21 12:14:16.310987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:40.138 [2024-10-21 12:14:16.311912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:40.138 [2024-10-21 12:14:16.312404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:40.138 [2024-10-21 12:14:16.312535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:40.398 12:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:40.659 [2024-10-21 12:14:17.120655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.659 12:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:40.920 12:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.920 [2024-10-21 12:14:17.509522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.180 12:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.180 12:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:41.442 Malloc0 00:29:41.442 12:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:41.703 Delay0 00:29:41.703 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.703 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:41.965 NULL1 00:29:41.965 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
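[editor's note] With the target listening, ns_hotplug_stress.sh@27-36 above provisions the whole stack over rpc.py: a TCP transport with an 8192-byte I/O unit, subsystem cnode1 capped at 10 namespaces, listeners for the subsystem and for discovery on 10.0.0.2:4420, and a bdev stack of a malloc bdev (32 x 512, size and block size as passed in the trace) wrapped by a delay bdev plus a null bdev (NULL1, 1000 x 512), both attached as namespaces. The delay bdev's -r/-t/-w/-n latency knobs keep reads in flight long enough for removals to race them, which is the point of the stress test. The same sequence as a plain script, arguments copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns "$nqn" NULL1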
00:29:42.227 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1172518 00:29:42.227 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:42.227 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:42.227 12:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.171 Read completed with error (sct=0, sc=11) 00:29:43.432 12:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.693 12:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:43.693 12:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:43.693 true 00:29:43.693 12:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:43.693 12:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.636 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.636 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:44.636 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:44.896 true 00:29:44.896 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:44.896 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.159 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.419 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:45.419 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:45.419 true 00:29:45.419 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:45.419 12:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 12:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.799 12:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:46.799 12:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:47.059 true 00:29:47.059 12:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:47.059 12:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.998 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.998 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:47.998 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:47.998 true 00:29:47.998 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:47.998 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:48.258 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.519 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:48.519 12:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:48.519 true 00:29:48.780 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:48.780 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.780 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.040 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:49.040 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:49.040 true 00:29:49.301 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:49.301 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.301 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.561 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:49.561 12:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:49.561 true 00:29:49.822 12:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:49.822 12:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.763 12:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
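[editor's note] From here to the end of the phase the log is the same few RPCs per iteration, interleaved with rate-limited bursts of "Message suppressed 999 times: Read completed with error" from perf. When skimming a console log this size, one quick way to see how far the stress loop got is to pull the last resize it issued; the log file name here is a placeholder, not something the harness produces:

    # Last null-bdev size the loop reached; 'console.log' is a hypothetical name.
    grep -o 'bdev_null_resize NULL1 [0-9]*' console.log | tail -1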
00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.056 12:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:51.056 12:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:51.393 true 00:29:51.393 12:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:51.393 12:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.976 12:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.236 12:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:52.236 12:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:52.497 true 00:29:52.497 12:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:52.497 12:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.758 12:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.758 12:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:52.758 12:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:53.018 true 00:29:53.018 12:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:53.018 12:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.219 12:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.219 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.219 12:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:54.219 12:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:54.479 true 00:29:54.479 12:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:54.479 12:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.739 12:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.739 12:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:54.739 12:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:54.999 true 00:29:54.999 12:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:54.999 12:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 12:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.383 12:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:56.383 12:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:56.644 true 00:29:56.644 12:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:56.644 12:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.587 12:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.587 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:57.587 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:57.847 true 00:29:57.848 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:57.848 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.848 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.108 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:58.108 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:58.369 true 00:29:58.369 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:58.369 12:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.310 12:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.570 12:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:59.570 12:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:59.831 true 00:29:59.831 12:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:29:59.831 12:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:00.772 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.772 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:00.772 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:01.032 true 00:30:01.032 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:01.032 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.293 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.293 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:01.293 12:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:01.554 true 00:30:01.554 12:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:01.554 12:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 12:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.755 12:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:02.755 12:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:03.015 true 00:30:03.015 12:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:03.015 12:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.956 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.956 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:03.956 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:04.216 true 00:30:04.216 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:04.216 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.476 12:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.476 12:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:04.476 12:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:04.736 true 00:30:04.736 12:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:04.736 12:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 12:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.120 12:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:06.120 12:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:30:06.382 true 00:30:06.382 12:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:06.382 12:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.323 12:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.323 12:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:07.323 12:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:07.585 true 00:30:07.585 12:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:07.585 12:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.585 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.847 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:07.847 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:08.108 true 00:30:08.108 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:08.108 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.368 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.368 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:08.368 12:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:08.629 true 00:30:08.629 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:08.629 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.890 
12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.890 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:08.890 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:09.151 true 00:30:09.151 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:09.151 12:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 12:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.532 12:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:10.533 12:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:10.793 true 00:30:10.793 12:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:10.793 12:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.734 12:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.734 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:11.734 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:11.734 true 00:30:11.993 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518 00:30:11.993 12:14:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:11.994 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:12.254 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:30:12.254 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:30:12.254 Initializing NVMe Controllers
00:30:12.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:12.254 Controller IO queue size 128, less than required.
00:30:12.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:12.254 Controller IO queue size 128, less than required.
00:30:12.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:12.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:12.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:12.254 Initialization complete. Launching workers.
00:30:12.254 ========================================================
00:30:12.254                                                                                        Latency(us)
00:30:12.254 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:12.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2290.76       1.12   36098.87    1800.97 1032069.92
00:30:12.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19336.17       9.44    6619.46    1201.05  302128.84
00:30:12.254 ========================================================
00:30:12.254 Total                                                                    :   21626.93      10.56    9741.97    1201.05 1032069.92
00:30:12.514 true
00:30:12.514 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1172518
00:30:12.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1172518) - No such process
00:30:12.514 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1172518
00:30:12.514 12:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:12.514 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:12.774 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:12.774 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:12.774 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
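The sh@44-sh@50 xtrace above comes from a loop of roughly the following shape, reconstructed from the trace; this is a minimal sketch, not the literal test source, and $rpc_py and $perf_pid are assumed helper names ($perf_pid standing for the pid of the background I/O generator, 1172518 in this run):

    #!/usr/bin/env bash
    # Sketch of the single-namespace hotplug loop seen in the xtrace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$perf_pid"; do                       # loop until the I/O generator exits
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1     # hot-remove namespace 1 under load
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0   # hot-add it back
        null_size=$((null_size + 1))                    # grow the null bdev every pass
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done
    wait "$perf_pid"                                    # matches the sh@53 wait above

The "No such process" message above is exactly this loop's exit condition firing: kill -0 fails once the I/O generator has finished, and the latency summary it printed on exit is the table shown.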
00:30:12.774 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:12.774 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:13.034 null0
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:13.034 null1
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.034 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:30:13.294 null2
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:30:13.294 null3
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.294 12:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:30:13.554 null4
00:30:13.554 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.554 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.554 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:30:13.815 null5
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:30:13.815 null6
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:13.815 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:30:14.076 null7
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
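The sh@58-sh@64 lines above fan the test out: eight null bdevs are created, then eight add_remove workers are launched in the background against the same subsystem, their pids collected for the sh@66 wait that follows. A minimal sketch under the same assumptions ($rpc_py as above; the add_remove body is sketched a little further below):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &               # worker i churns nsid i+1 on bdev null$i
        pids+=($!)
    done
    wait "${pids[@]}"                                  # pids 1178681 1178683 ... in this run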
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.076 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
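The interleaved sh@14-sh@18 xtrace from here on is those eight workers running concurrently; each one executes roughly this add_remove body (a sketch reconstructed from the trace, with $rpc_py and $nqn as assumed above):

    add_remove() {
        local nsid=$1 bdev=$2
        # Hot-add and hot-remove the same namespace ten times.
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }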
00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1178681 1178683 1178686 1178690 1178692 1178695 1178696 1178699 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.077 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.347 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:14.609 12:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.609 12:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.609 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.871 12:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.871 12:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.871 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.131 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.132 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.132 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.132 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.392 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.393 12:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.393 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.654 12:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:15.654 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.655 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.917 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:16.179 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.441 12:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.441 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:16.702 12:14:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.702 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.702 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.702 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.702 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.702 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:16.703 
12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.703 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.965 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.227 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.489 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.489 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.490 12:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.490 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.751 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.011 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.012 12:14:54 
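The add/remove churn traced above is the heart of the hotplug stress test. A minimal sketch of the loop behind the @16-@18 tags, assuming one background worker per namespace ID (reconstructed from this trace, not the exact body of ns_hotplug_stress.sh):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # one worker per namespace ID; several run concurrently, which is why
    # the add/remove entries above interleave out of order
    stress_ns() {
        local nsid=$1 i
        for (( i = 0; i < 10; ++i )); do
            # attach null bdev "null<nsid-1>" as namespace <nsid> of cnode1 ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))"
            # ... then detach it again, exercising the namespace hotplug path
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for nsid in {1..8}; do stress_ns "$nsid" & done
    wait
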
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.012 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.272 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.272 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.272 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.272 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.272 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.273 rmmod nvme_tcp 00:30:18.273 rmmod nvme_fabrics 00:30:18.273 rmmod nvme_keyring 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1172085 ']' 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1172085 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1172085 ']' 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1172085 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:18.273 12:14:54 
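The @950-@974 entries just below trace autotest_common.sh's killprocess helper shutting down the target app (pid 1172085, whose command name resolves to reactor_1). A rough sketch of that pattern, reconstructed from the xtrace tags; the real helper has more branches:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1               # @950: a pid is required
        kill -0 "$pid" || return 0              # @954: nothing to do if gone
        if [ "$(uname)" = Linux ]; then         # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956 -> reactor_1
        fi
        [ "$process_name" = sudo ] && return 1  # @960: never kill a bare sudo
        echo "killing process with pid $pid"    # @968
        kill "$pid"                             # @969
        wait "$pid"                             # @974: reap the child
    }

Once the reactor is gone, nvmftestfini's iptr and remove_spdk_ns steps traced further below strip the SPDK_NVMF-tagged iptables rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), delete the cvl_0_0_ns_spdk namespace, and flush the initiator address from cvl_0_1.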
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1172085 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1172085' 00:30:18.273 killing process with pid 1172085 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1172085 00:30:18.273 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1172085 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.534 12:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.447 12:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.448 00:30:20.448 real 0m48.638s 00:30:20.448 user 2m55.639s 00:30:20.448 sys 0m20.753s 00:30:20.448 12:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.448 12:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:20.448 ************************************ 00:30:20.448 END TEST nvmf_ns_hotplug_stress 00:30:20.448 ************************************ 00:30:20.448 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:20.448 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:20.448 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.448 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:20.710 ************************************ 00:30:20.710 START TEST nvmf_delete_subsystem 00:30:20.710 ************************************ 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:20.710 * Looking for test storage... 00:30:20.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.710 --rc genhtml_branch_coverage=1 00:30:20.710 --rc genhtml_function_coverage=1 00:30:20.710 --rc genhtml_legend=1 00:30:20.710 --rc geninfo_all_blocks=1 00:30:20.710 --rc geninfo_unexecuted_blocks=1 00:30:20.710 00:30:20.710 ' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.710 --rc genhtml_branch_coverage=1 00:30:20.710 --rc genhtml_function_coverage=1 00:30:20.710 --rc genhtml_legend=1 00:30:20.710 --rc geninfo_all_blocks=1 00:30:20.710 --rc geninfo_unexecuted_blocks=1 00:30:20.710 00:30:20.710 ' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.710 --rc genhtml_branch_coverage=1 00:30:20.710 --rc genhtml_function_coverage=1 00:30:20.710 --rc genhtml_legend=1 00:30:20.710 --rc geninfo_all_blocks=1 00:30:20.710 --rc geninfo_unexecuted_blocks=1 00:30:20.710 00:30:20.710 ' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.710 --rc genhtml_branch_coverage=1 00:30:20.710 --rc genhtml_function_coverage=1 00:30:20.710 --rc 
genhtml_legend=1 00:30:20.710 --rc geninfo_all_blocks=1 00:30:20.710 --rc geninfo_unexecuted_blocks=1 00:30:20.710 00:30:20.710 ' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.710 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.710 12:14:57 
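The scripts/common.sh@333-@368 entries above are a dotted-version comparison gating the lcov coverage options: "lt 1.15 2" splits both versions on the characters [.-:] and compares fields numerically, so 1 < 2 in the first field and the branch/function coverage flags get exported. A hedged sketch of that helper, with the field handling simplified:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                  # split version fields on dots/dashes/colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # first differing field decides; missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
    }

With lcov at 1.15, "lt 1.15 2" succeeds, which is why the LCOV_OPTS/LCOV exports above carry the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options.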
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.711 12:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:28.849 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.849 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.849 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.849 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.850 12:15:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.850 12:15:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:28.850 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:28.850 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.850 12:15:04 
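The "Found ..." entries here come from gather_supported_nvmf_pci_devs: the two E810 ports at 0000:4b:00.0/1 (device id 0x159b) are matched against the supported-NIC tables built above, then resolved to their kernel interfaces through sysfs. A sketch of that scan, assuming pci_devs was already filled from the PCI bus cache:

    # match Intel E810 ids, then look up the net interface behind each address
    e810=(0x1592 0x159b)
    for pci in "${pci_devs[@]}"; do                 # 0000:4b:00.0, 0000:4b:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # strip dirs, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")            # -> cvl_0_0, cvl_0_1
    done
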
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:28.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:28.850 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:30:28.850 00:30:28.850 --- 10.0.0.2 ping statistics --- 00:30:28.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.850 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:30:28.850 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:30:28.851 00:30:28.851 --- 10.0.0.1 ping statistics --- 00:30:28.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.851 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1183858 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1183858 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1183858 ']' 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.851 12:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:28.851 [2024-10-21 12:15:04.872080] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.851 [2024-10-21 12:15:04.873198] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:30:28.851 [2024-10-21 12:15:04.873251] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.851 [2024-10-21 12:15:04.962386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:28.851 [2024-10-21 12:15:05.014607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.851 [2024-10-21 12:15:05.014658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.851 [2024-10-21 12:15:05.014667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.851 [2024-10-21 12:15:05.014674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.851 [2024-10-21 12:15:05.014681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.851 [2024-10-21 12:15:05.016276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.851 [2024-10-21 12:15:05.016279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.851 [2024-10-21 12:15:05.092868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.851 [2024-10-21 12:15:05.093350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:28.851 [2024-10-21 12:15:05.093699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
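Annotation: everything from nvmf_tcp_init through the reactor notices above is one reusable fixture: the first E810 port moves into a private network namespace as the target side, its twin stays in the root namespace as the initiator, port 4420 is opened, connectivity is proven both ways, and nvmf_tgt is started inside the namespace. Condensed into a runnable sketch (names, addresses, and flags are the ones from this run; the relative nvmf_tgt path is an assumption):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # The test tags this rule with an SPDK_NVMF comment so cleanup can grep it back out later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns to namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace to root ns
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

The --interrupt-mode flag is what the 'Set spdk_thread (...) to intr mode' notices acknowledge: the two reactors (cores 0 and 1 of -m 0x3) and the nvmf poll groups run interrupt-driven rather than busy-polling.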
00:30:29.111 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.111 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:29.111 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:29.111 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.111 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 [2024-10-21 12:15:05.733377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 [2024-10-21 12:15:05.765834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 NULL1 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 Delay0 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1184168 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:29.372 12:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:29.372 [2024-10-21 12:15:05.877001] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
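Annotation: everything the rpc_cmd lines above configure can be replayed against the target's /var/tmp/spdk.sock with scripts/rpc.py; rpc_cmd is the test suite's wrapper around it. A sketch of the full scaffold, ending with the perf run that the deletion will interrupt. Per SPDK's bdev_delay_create options, -r/-t are average/p99 read latency and -w/-n average/p99 write latency, all in microseconds, so every I/O against Delay0 takes roughly one second:

  RPC='scripts/rpc.py'                                   # assumed invocation path
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

With a queue depth of 128 and a one-second per-I/O delay, the sleep 2 before the delete guarantees a full window of commands in flight when the subsystem disappears.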
00:30:31.286 12:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:31.286 12:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.286 12:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:31.546 Write completed with error (sct=0, sc=8)
00:30:31.546 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:30:31.547 [2024-10-21 12:15:07.959349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e7120 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:31.547 [2024-10-21 12:15:07.964098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb41000cfe0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:32.488 [2024-10-21 12:15:08.935660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ed050 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:32.489 [2024-10-21 12:15:08.962561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0490 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:32.489 [2024-10-21 12:15:08.963086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114fae0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:32.489 [2024-10-21 12:15:08.964938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb410000c00 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:32.489 [2024-10-21 12:15:08.965236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb41000d310 is same with the state(6) to be set
00:30:32.489 Initializing NVMe Controllers
00:30:32.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:32.489 Controller IO queue size 128, less than required.
00:30:32.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:32.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:32.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:32.489 Initialization complete. Launching workers. 00:30:32.489 ======================================================== 00:30:32.489 Latency(us) 00:30:32.489 Device Information : IOPS MiB/s Average min max 00:30:32.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.18 0.08 890854.77 316.34 1006809.56 00:30:32.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.26 0.08 926023.91 355.99 1010963.95 00:30:32.489 ======================================================== 00:30:32.489 Total : 329.44 0.16 907642.45 316.34 1010963.95 00:30:32.489 00:30:32.489 [2024-10-21 12:15:08.965719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ed050 (9): Bad file descriptor 00:30:32.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:32.489 12:15:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.489 12:15:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:32.489 12:15:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1184168 00:30:32.489 12:15:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1184168 00:30:33.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1184168) - No such process 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1184168 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1184168 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1184168 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.060 [2024-10-21 12:15:09.501738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1184927 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:33.060 12:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.060 [2024-10-21 12:15:09.589073] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
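Annotation: two things worth decoding before the second run's output arrives. First, the sct=0, sc=8 status that flooded the first run appears to be NVMe generic status 0x08, Command Aborted due to SQ Deletion: deleting the subsystem tears down its queue pairs while perf still has its whole -q 128 window outstanding, so every queued command completes aborted and perf exits with 'errors occurred'. Second, the delay=0 / kill -0 / sleep 0.5 lines are the script's poll loop waiting for that exit; its shape, reconstructed as a sketch (the error message is this sketch's, not the script's):

  # Poll until perf dies from the aborted I/O, giving up after ~15 s (30 polls).
  # kill -0 delivers no signal; it only tests whether the PID still exists, so
  # the "kill: (1184168) - No such process" line above is this check failing,
  # i.e. the loop's success condition.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && { echo 'perf did not exit after delete' >&2; exit 1; }
    sleep 0.5
  done

The deprecation warning that closes both setups is also actionable: the initiator connects via the discovery subsystem, whose listener was never registered. Adding it explicitly would silence the warning (a sketch, assuming the stock rpc.py spelling of the discovery NQN):

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 4420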
00:30:33.631 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:33.631 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:33.631 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:34.203 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:34.203 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:34.203 12:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:34.464 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:34.464 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:34.464 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:35.036 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:35.036 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:35.036 12:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:35.607 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:35.607 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:35.607 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:36.178 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:36.178 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:36.178 12:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:36.178 Initializing NVMe Controllers 00:30:36.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.178 Controller IO queue size 128, less than required. 00:30:36.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:36.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:36.178 Initialization complete. Launching workers. 
00:30:36.178 ======================================================== 00:30:36.178 Latency(us) 00:30:36.178 Device Information : IOPS MiB/s Average min max 00:30:36.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002072.35 1000236.16 1005969.78 00:30:36.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003522.17 1000326.60 1009649.61 00:30:36.178 ======================================================== 00:30:36.178 Total : 256.00 0.12 1002797.26 1000236.16 1009649.61 00:30:36.178 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1184927 00:30:36.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1184927) - No such process 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1184927 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.749 rmmod nvme_tcp 00:30:36.749 rmmod nvme_fabrics 00:30:36.749 rmmod nvme_keyring 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1183858 ']' 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1183858 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1183858 ']' 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1183858 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1183858 00:30:36.749 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1183858' 00:30:36.750 killing process with pid 1183858 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1183858 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1183858 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.750 12:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.295 00:30:39.295 real 0m18.331s 00:30:39.295 user 0m26.340s 00:30:39.295 sys 0m7.493s 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:39.295 ************************************ 00:30:39.295 END TEST nvmf_delete_subsystem 00:30:39.295 ************************************ 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.295 ************************************ 00:30:39.295 START TEST nvmf_host_management 00:30:39.295 ************************************ 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:39.295 * Looking for test storage... 00:30:39.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:39.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.295 --rc genhtml_branch_coverage=1 00:30:39.295 --rc genhtml_function_coverage=1 00:30:39.295 --rc genhtml_legend=1 00:30:39.295 --rc geninfo_all_blocks=1 00:30:39.295 --rc geninfo_unexecuted_blocks=1 00:30:39.295 00:30:39.295 ' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:39.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.295 --rc genhtml_branch_coverage=1 00:30:39.295 --rc genhtml_function_coverage=1 00:30:39.295 --rc genhtml_legend=1 00:30:39.295 --rc geninfo_all_blocks=1 00:30:39.295 --rc geninfo_unexecuted_blocks=1 00:30:39.295 00:30:39.295 ' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:39.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.295 --rc genhtml_branch_coverage=1 00:30:39.295 --rc genhtml_function_coverage=1 00:30:39.295 --rc genhtml_legend=1 00:30:39.295 --rc geninfo_all_blocks=1 00:30:39.295 --rc geninfo_unexecuted_blocks=1 00:30:39.295 00:30:39.295 ' 00:30:39.295 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:39.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.295 --rc genhtml_branch_coverage=1 00:30:39.295 --rc genhtml_function_coverage=1 00:30:39.295 --rc genhtml_legend=1 
00:30:39.295 --rc geninfo_all_blocks=1 00:30:39.295 --rc geninfo_unexecuted_blocks=1 00:30:39.295 00:30:39.295 ' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.296 12:15:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.296 12:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:47.622 12:15:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:47.622 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:47.622 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
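
The traces above show nvmf/common.sh discovering usable NICs: it builds per-vendor PCI device-ID lists (Intel E810/X722, Mellanox mlx5), then maps each matching PCI function to its kernel net interfaces through sysfs. A minimal standalone sketch of that sysfs lookup, with the two E810 ports from this run hard-coded (the harness derives them from its PCI bus cache instead):

#!/usr/bin/env bash
# Sketch only: list the net interfaces behind known NVMe-oF-capable NICs.
# The PCI addresses are the two E810 (0x8086:0x159b) functions seen in this
# log; nvmf/common.sh discovers them dynamically rather than hard-coding.
shopt -s nullglob
pci_devs=("0000:4b:00.0" "0000:4b:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Same glob the harness uses: netdevs registered under this PCI function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if (( ${#pci_net_devs[@]} == 0 )); then
        echo "No net devices under $pci" >&2
        continue
    fi
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
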
00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.622 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:47.622 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:47.623 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:47.623 12:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:47.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:47.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:30:47.623 00:30:47.623 --- 10.0.0.2 ping statistics --- 00:30:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.623 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:47.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:47.623 00:30:47.623 --- 10.0.0.1 ping statistics --- 00:30:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.623 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1190121 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1190121 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1190121 ']' 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:47.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:47.623 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 [2024-10-21 12:15:23.222107] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:47.623 [2024-10-21 12:15:23.223229] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:30:47.623 [2024-10-21 12:15:23.223278] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.623 [2024-10-21 12:15:23.295486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.623 [2024-10-21 12:15:23.343418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.623 [2024-10-21 12:15:23.343467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.624 [2024-10-21 12:15:23.343473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.624 [2024-10-21 12:15:23.343479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.624 [2024-10-21 12:15:23.343484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.624 [2024-10-21 12:15:23.345272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.624 [2024-10-21 12:15:23.345438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.624 [2024-10-21 12:15:23.345696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:47.624 [2024-10-21 12:15:23.345698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.624 [2024-10-21 12:15:23.417444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:47.624 [2024-10-21 12:15:23.418545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:47.624 [2024-10-21 12:15:23.418553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:47.624 [2024-10-21 12:15:23.419143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:47.624 [2024-10-21 12:15:23.419184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
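
With nvmf_tgt now running in interrupt mode inside the cvl_0_0_ns_spdk namespace, the traces that follow create the TCP transport and batch the subsystem setup through rpcs.txt, whose exact contents are not echoed here. A hedged reconstruction of that configuration with SPDK's scripts/rpc.py, filling in only values visible elsewhere in this log (64 MiB x 512 B Malloc0, serial SPDKISFASTANDAWESOME, cnode0/host0, listener 10.0.0.2:4420):

# Assumes the nvmf_tgt from this run is up; this mirrors values visible in
# the log, not the literal rpcs.txt batch, which the trace does not print.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Allow-any-host is deliberately left off in this sketch: the host_management test exercises nvmf_subsystem_add_host/nvmf_subsystem_remove_host, which only has an effect when host access is restricted.
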
00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 [2024-10-21 12:15:23.502353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 Malloc0 00:30:47.624 [2024-10-21 12:15:23.603051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1190338 00:30:47.624 12:15:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1190338 /var/tmp/bdevperf.sock 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1190338 ']' 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:47.624 { 00:30:47.624 "params": { 00:30:47.624 "name": "Nvme$subsystem", 00:30:47.624 "trtype": "$TEST_TRANSPORT", 00:30:47.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.624 "adrfam": "ipv4", 00:30:47.624 "trsvcid": "$NVMF_PORT", 00:30:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.624 "hdgst": ${hdgst:-false}, 00:30:47.624 "ddgst": ${ddgst:-false} 00:30:47.624 }, 00:30:47.624 "method": "bdev_nvme_attach_controller" 00:30:47.624 } 00:30:47.624 EOF 00:30:47.624 )") 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
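
The gen_nvmf_target_json heredoc above assembles the bdev_nvme_attach_controller parameters printed just below, which bdevperf reads over an anonymous descriptor (--json /dev/fd/63 in the trace). A sketch of the equivalent standalone launch, assuming the standard SPDK JSON-config wrapper around that params block and the target from this run still listening:

# Sketch: reproduce this bdevperf run by hand. The -q/-o/-w/-t flags and the
# params object match the trace; the outer "subsystems" wrapper is an
# assumption about the shape gen_nvmf_target_json emits.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)
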
00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:47.624 12:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:47.624 "params": { 00:30:47.624 "name": "Nvme0", 00:30:47.624 "trtype": "tcp", 00:30:47.624 "traddr": "10.0.0.2", 00:30:47.624 "adrfam": "ipv4", 00:30:47.624 "trsvcid": "4420", 00:30:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.624 "hdgst": false, 00:30:47.624 "ddgst": false 00:30:47.624 }, 00:30:47.624 "method": "bdev_nvme_attach_controller" 00:30:47.624 }' 00:30:47.624 [2024-10-21 12:15:23.710078] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:30:47.624 [2024-10-21 12:15:23.710146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190338 ] 00:30:47.624 [2024-10-21 12:15:23.792144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.624 [2024-10-21 12:15:23.859506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.624 Running I/O for 10 seconds... 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.200 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:48.200 [2024-10-21 12:15:24.614365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.200 [2024-10-21 12:15:24.614491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.201 [2024-10-21 12:15:24.614498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.201 [2024-10-21 12:15:24.614506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.201 
[2024-10-21 12:15:24.614513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the state(6) to be set 00:30:48.201 [... the same tcp.c:1773 recv-state notice for tqpair 0x8235a0 repeats at 12:15:24.614520 through 12:15:24.614680 ...] 00:30:48.201 [2024-10-21 12:15:24.614687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8235a0 is same with the
state(6) to be set 00:30:48.201 [2024-10-21 12:15:24.616457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.201 [2024-10-21 12:15:24.616516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.616528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.201 [2024-10-21 12:15:24.616536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.616545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.201 [2024-10-21 12:15:24.616555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.616565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.201 [2024-10-21 12:15:24.616573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.616581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fcca10 is same with the state(6) to be set 00:30:48.201 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.201 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:48.201 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.201 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:48.201 [2024-10-21 12:15:24.628764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcca10 (9): Bad file descriptor 00:30:48.201 [2024-10-21 12:15:24.628871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.201 [2024-10-21 12:15:24.628884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.628902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.201 [2024-10-21 12:15:24.628911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.628921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.201 [2024-10-21 12:15:24.628929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [2024-10-21 12:15:24.628941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.201 [2024-10-21 12:15:24.628949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.201 [... the remaining in-flight I/Os on qid:1 abort the same way, lba stepping by 128 per command: READ cid:54-63 (lba:113408-114560) then WRITE cid:0-29 (lba:114688-118400), each nvme_qpair.c: 243 command print followed by an nvme_qpair.c: 474 ABORTED - SQ DELETION (00/08) completion; the capture breaks off mid-entry at WRITE cid:29 ...]
TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:48.202 [2024-10-21 12:15:24.629856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.202 [2024-10-21 12:15:24.629866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.629988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.629996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.630005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.630013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.630022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:48.203 [2024-10-21 12:15:24.630029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.203 [2024-10-21 12:15:24.630119] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21e49e0 was disconnected and freed. reset controller. 00:30:48.203 [2024-10-21 12:15:24.631334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:48.203 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.203 12:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:48.203 task offset: 112896 on job bdev=Nvme0n1 fails 00:30:48.203 00:30:48.203 Latency(us) 00:30:48.203 [2024-10-21T10:15:24.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.203 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.203 Job: Nvme0n1 ended in about 0.56 seconds with error 00:30:48.203 Verification LBA range: start 0x0 length 0x400 00:30:48.203 Nvme0n1 : 0.56 1578.65 98.67 114.55 0.00 36831.26 1733.97 35389.44 00:30:48.203 [2024-10-21T10:15:24.798Z] =================================================================================================================== 00:30:48.203 [2024-10-21T10:15:24.798Z] Total : 1578.65 98.67 114.55 0.00 36831.26 1733.97 35389.44 00:30:48.203 [2024-10-21 12:15:24.633538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:48.203 [2024-10-21 12:15:24.726094] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1190338 00:30:49.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1190338) - No such process 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:49.149 { 00:30:49.149 "params": { 00:30:49.149 "name": "Nvme$subsystem", 00:30:49.149 "trtype": "$TEST_TRANSPORT", 00:30:49.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.149 "adrfam": "ipv4", 00:30:49.149 "trsvcid": "$NVMF_PORT", 
00:30:49.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.149 "hdgst": ${hdgst:-false}, 00:30:49.149 "ddgst": ${ddgst:-false} 00:30:49.149 }, 00:30:49.149 "method": "bdev_nvme_attach_controller" 00:30:49.149 } 00:30:49.149 EOF 00:30:49.149 )") 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:49.149 12:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:49.149 "params": { 00:30:49.149 "name": "Nvme0", 00:30:49.149 "trtype": "tcp", 00:30:49.149 "traddr": "10.0.0.2", 00:30:49.149 "adrfam": "ipv4", 00:30:49.149 "trsvcid": "4420", 00:30:49.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.150 "hdgst": false, 00:30:49.150 "ddgst": false 00:30:49.150 }, 00:30:49.150 "method": "bdev_nvme_attach_controller" 00:30:49.150 }' 00:30:49.150 [2024-10-21 12:15:25.691004] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:30:49.150 [2024-10-21 12:15:25.691061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190689 ] 00:30:49.410 [2024-10-21 12:15:25.767410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.410 [2024-10-21 12:15:25.802586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.671 Running I/O for 1 seconds... 
00:30:50.612 1472.00 IOPS, 92.00 MiB/s
00:30:50.612 Latency(us)
00:30:50.612 [2024-10-21T10:15:27.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:50.612 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:50.612 Verification LBA range: start 0x0 length 0x400
00:30:50.612 Nvme0n1 : 1.04 1480.68 92.54 0.00 0.00 42423.44 2826.24 36044.80
00:30:50.612 [2024-10-21T10:15:27.207Z] ===================================================================================================================
00:30:50.612 [2024-10-21T10:15:27.207Z] Total : 1480.68 92.54 0.00 0.00 42423.44 2826.24 36044.80
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:50.874 rmmod nvme_tcp
00:30:50.874 rmmod nvme_fabrics
00:30:50.874 rmmod nvme_keyring
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1190121 ']'
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1190121
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1190121 ']'
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1190121
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:50.874 12:15:27
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1190121 00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1190121' 00:30:50.874 killing process with pid 1190121 00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1190121 00:30:50.874 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1190121 00:30:51.136 [2024-10-21 12:15:27.539631] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.136 12:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.050 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:53.312 00:30:53.312 real 0m14.189s 00:30:53.312 user 0m19.883s 00:30:53.312 sys 0m7.478s 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.312 ************************************ 00:30:53.312 END TEST nvmf_host_management 00:30:53.312 ************************************ 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:53.312 ************************************ 00:30:53.312 START TEST nvmf_lvol 00:30:53.312 ************************************ 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:53.312 * Looking for test storage... 00:30:53.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:30:53.312 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:53.574 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.575 --rc genhtml_branch_coverage=1 00:30:53.575 --rc genhtml_function_coverage=1 00:30:53.575 --rc genhtml_legend=1 00:30:53.575 --rc geninfo_all_blocks=1 00:30:53.575 --rc geninfo_unexecuted_blocks=1 00:30:53.575 00:30:53.575 ' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.575 --rc genhtml_branch_coverage=1 00:30:53.575 --rc genhtml_function_coverage=1 00:30:53.575 --rc genhtml_legend=1 00:30:53.575 --rc geninfo_all_blocks=1 00:30:53.575 --rc geninfo_unexecuted_blocks=1 00:30:53.575 00:30:53.575 ' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.575 --rc genhtml_branch_coverage=1 00:30:53.575 --rc genhtml_function_coverage=1 00:30:53.575 --rc genhtml_legend=1 00:30:53.575 --rc geninfo_all_blocks=1 00:30:53.575 --rc geninfo_unexecuted_blocks=1 00:30:53.575 00:30:53.575 ' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.575 --rc genhtml_branch_coverage=1 00:30:53.575 --rc genhtml_function_coverage=1 00:30:53.575 --rc genhtml_legend=1 00:30:53.575 --rc geninfo_all_blocks=1 00:30:53.575 --rc geninfo_unexecuted_blocks=1 00:30:53.575 00:30:53.575 ' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.575 12:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:53.575 12:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.723 12:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:01.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:01.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:01.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:01.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:01.723 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.724 
12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:01.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:01.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms
00:31:01.724
00:31:01.724 --- 10.0.0.2 ping statistics ---
00:31:01.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:01.724 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms
00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:01.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:01.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:31:01.724 00:31:01.724 --- 10.0.0.1 ping statistics --- 00:31:01.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.724 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1195049 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1195049 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1195049 ']' 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:01.724 12:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:01.724 [2024-10-21 12:15:37.489747] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
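The nvmftestinit trace above moves one of the two e810 ports (cvl_0_0) into a private network namespace, addresses the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), opens port 4420 in the firewall, and ping-checks both directions before the target starts. A sketch of the same layout with a veth pair standing in for the physical ports, so it runs on any Linux box; the namespace, interface names, and 10.0.0.x addresses below simply mirror this run and are not required values:

    #!/usr/bin/env bash
    # Sketch of the nvmftestinit topology, assuming a veth pair instead of the
    # two e810 ports used in this run.
    set -euo pipefail

    NS=cvl_0_0_ns_spdk
    sudo ip netns add "$NS"
    sudo ip link add cvl_0_1 type veth peer name cvl_0_0   # initiator side <-> target side
    sudo ip link set cvl_0_0 netns "$NS"                   # target port lives inside the namespace

    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    sudo ip link set cvl_0_1 up
    sudo ip netns exec "$NS" ip link set cvl_0_0 up
    sudo ip netns exec "$NS" ip link set lo up

    # The same sanity checks the harness runs before starting the target.
    ping -c 1 10.0.0.2
    sudo ip netns exec "$NS" ping -c 1 10.0.0.1

Keeping only the target port inside the namespace forces NVMe/TCP traffic onto the link (or veth) instead of short-circuiting through the host loopback, which is the point of the split.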
00:31:01.724 [2024-10-21 12:15:37.490959] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:31:01.724 [2024-10-21 12:15:37.491013] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.724 [2024-10-21 12:15:37.579846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.724 [2024-10-21 12:15:37.632763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.724 [2024-10-21 12:15:37.632810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.724 [2024-10-21 12:15:37.632818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.724 [2024-10-21 12:15:37.632826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.724 [2024-10-21 12:15:37.632832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.724 [2024-10-21 12:15:37.634934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.724 [2024-10-21 12:15:37.635093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.724 [2024-10-21 12:15:37.635094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.724 [2024-10-21 12:15:37.711239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:01.724 [2024-10-21 12:15:37.711713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:01.724 [2024-10-21 12:15:37.712056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:01.724 [2024-10-21 12:15:37.713112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
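
The nvmf_lvol body that follows is easier to read as the bare RPC sequence it drives. Condensed from the xtrace below, with rpc.py standing in for the full /var/jenkins/.../spdk/scripts/rpc.py path and the UUIDs regenerated on every run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB IO unit
    rpc.py bdev_malloc_create 64 512                                   # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512                                   # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the pair
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # e21d845c-... on this run
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, ec56a6c2-...
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # ...then, while spdk_nvme_perf writes to the namespace at queue depth 128:
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30                                 # grow the live lvol to 30 MiB
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"                                  # make the clone independent of its snapshot

The point of the test is that the snapshot/resize/clone/inflate sequence lands while I/O is in flight, which is exactly what the perf results below measure.
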
00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:01.986 [2024-10-21 12:15:38.523957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.986 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:02.250 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:02.250 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:02.511 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:02.511 12:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:02.772 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:03.033 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e21d845c-a863-4626-8c10-ec17fe17ae48 00:31:03.033 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e21d845c-a863-4626-8c10-ec17fe17ae48 lvol 20 00:31:03.033 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ec56a6c2-5941-4d2d-a108-4c249f6c9d7f 00:31:03.033 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:03.294 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec56a6c2-5941-4d2d-a108-4c249f6c9d7f 00:31:03.554 12:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.555 [2024-10-21 12:15:40.116005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:03.816 12:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.816 12:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1195723 00:31:03.816 12:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:03.816 12:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:05.203 12:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ec56a6c2-5941-4d2d-a108-4c249f6c9d7f MY_SNAPSHOT 00:31:05.203 12:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=41cc7111-6849-4ad9-a1bb-339cd76b05bd 00:31:05.203 12:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ec56a6c2-5941-4d2d-a108-4c249f6c9d7f 30 00:31:05.464 12:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 41cc7111-6849-4ad9-a1bb-339cd76b05bd MY_CLONE 00:31:05.725 12:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=912c5b3f-e6ea-4541-b687-249cb840f16b 00:31:05.725 12:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 912c5b3f-e6ea-4541-b687-249cb840f16b 00:31:06.297 12:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1195723 00:31:14.433 Initializing NVMe Controllers 00:31:14.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:14.433 Controller IO queue size 128, less than required. 00:31:14.433 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:14.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:14.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:14.433 Initialization complete. Launching workers. 
00:31:14.433 ========================================================
00:31:14.433 Latency(us)
00:31:14.433 Device Information : IOPS MiB/s Average min max
00:31:14.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15049.20 58.79 8507.86 583.64 60852.69
00:31:14.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15073.40 58.88 8494.49 4083.42 61716.59
00:31:14.433 ========================================================
00:31:14.433 Total : 30122.60 117.67 8501.17 583.64 61716.59
00:31:14.433
00:31:14.433 12:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:14.433 12:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec56a6c2-5941-4d2d-a108-4c249f6c9d7f
00:31:14.693 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e21d845c-a863-4626-8c10-ec17fe17ae48
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:14.954 rmmod nvme_tcp
00:31:14.954 rmmod nvme_fabrics
00:31:14.954 rmmod nvme_keyring
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1195049 ']'
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1195049
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1195049 ']'
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1195049
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1195049 00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1195049' 00:31:14.954 killing process with pid 1195049 00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1195049 00:31:14.954 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1195049 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.215 12:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.127 00:31:17.127 real 0m23.925s 00:31:17.127 user 0m56.128s 00:31:17.127 sys 0m10.847s 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:17.127 ************************************ 00:31:17.127 END TEST nvmf_lvol 00:31:17.127 ************************************ 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.127 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.388 ************************************ 00:31:17.388 START TEST nvmf_lvs_grow 00:31:17.388 
************************************ 00:31:17.388 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:17.388 * Looking for test storage... 00:31:17.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:17.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.389 --rc genhtml_branch_coverage=1 00:31:17.389 --rc genhtml_function_coverage=1 00:31:17.389 --rc genhtml_legend=1 00:31:17.389 --rc geninfo_all_blocks=1 00:31:17.389 --rc geninfo_unexecuted_blocks=1 00:31:17.389 00:31:17.389 ' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:17.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.389 --rc genhtml_branch_coverage=1 00:31:17.389 --rc genhtml_function_coverage=1 00:31:17.389 --rc genhtml_legend=1 00:31:17.389 --rc geninfo_all_blocks=1 00:31:17.389 --rc geninfo_unexecuted_blocks=1 00:31:17.389 00:31:17.389 ' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:17.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.389 --rc genhtml_branch_coverage=1 00:31:17.389 --rc genhtml_function_coverage=1 00:31:17.389 --rc genhtml_legend=1 00:31:17.389 --rc geninfo_all_blocks=1 00:31:17.389 --rc geninfo_unexecuted_blocks=1 00:31:17.389 00:31:17.389 ' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:17.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.389 --rc genhtml_branch_coverage=1 00:31:17.389 --rc genhtml_function_coverage=1 00:31:17.389 --rc genhtml_legend=1 00:31:17.389 --rc geninfo_all_blocks=1 00:31:17.389 --rc geninfo_unexecuted_blocks=1 00:31:17.389 00:31:17.389 ' 00:31:17.389 12:15:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.389 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
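
Underneath the PATH noise, build_nvmf_app_args is just accumulating the target's argv in a bash array. A rough sketch of the pattern, with the array lines taken from the nvmf/common.sh xtrace here and the initial binary assignment assumed (it is set earlier in the sourced script):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed base
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id 0, all tracepoint groups enabled
    NVMF_APP+=("${NO_HUGE[@]}")                    # empty unless a no-hugepages run was requested
    NVMF_APP+=(--interrupt-mode)                   # this suite always passes --interrupt-mode
    # Once the namespace exists, the whole command is prefixed so the target runs inside it:
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    # Net effect, as launched further below:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

Keeping the command in an array rather than a string is what preserves the quoting of each flag when it is finally executed.
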
00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.390 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.651 12:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.793 12:16:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:25.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:25.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:25.793 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:25.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.793 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.794 12:16:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:31:25.794 00:31:25.794 --- 10.0.0.2 ping statistics --- 00:31:25.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.794 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:31:25.794 00:31:25.794 --- 10.0.0.1 ping statistics --- 00:31:25.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.794 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1201965 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1201965 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1201965 ']' 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.794 12:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:25.794 [2024-10-21 12:16:01.530737] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
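
The gather_supported_nvmf_pci_devs walk a few records up boils down to one sysfs glob per discovered function. A condensed sketch, using the two ice/E810 functions (0x8086:0x159b) found on this rig; the operstate filtering implied by the [[ up == up ]] expansions is folded into a comment:

    pci_devs=(0000:4b:00.0 0000:4b:00.1)   # from the PCI scan above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Each network function lists its kernel interface(s) under sysfs.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue   # skip functions with no bound netdev
        # (the real helper also checks each interface's operstate is "up")
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keeping e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

With two interfaces found, the first becomes NVMF_TARGET_INTERFACE (cvl_0_0) and the second the initiator side (cvl_0_1), which is exactly the assignment visible in the records above.
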
00:31:25.794 [2024-10-21 12:16:01.531866] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:31:25.794 [2024-10-21 12:16:01.531917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.794 [2024-10-21 12:16:01.618091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.794 [2024-10-21 12:16:01.669674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.794 [2024-10-21 12:16:01.669724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.794 [2024-10-21 12:16:01.669733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.794 [2024-10-21 12:16:01.669740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.794 [2024-10-21 12:16:01.669747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.794 [2024-10-21 12:16:01.670495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.794 [2024-10-21 12:16:01.745988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.794 [2024-10-21 12:16:01.746286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:25.794 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:25.794 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:25.794 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:25.794 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.794 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:26.054 [2024-10-21 12:16:02.563392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:26.054 ************************************ 00:31:26.054 START TEST lvs_grow_clean 00:31:26.054 ************************************ 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:26.054 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:26.314 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:26.314 12:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:26.574 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:26.574 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:26.574 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:26.834 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:26.834 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:26.834 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b223e273-021e-41df-8dc4-c39ca06da2ed lvol 150 00:31:27.095 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a0299a45-0de4-4a97-b3b3-df2ef5e4e99d 00:31:27.095 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:27.095 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:27.095 [2024-10-21 12:16:03.603039] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:27.095 [2024-10-21 12:16:03.603208] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:27.095 true 00:31:27.095 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:27.095 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:27.355 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:27.355 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:27.614 12:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0299a45-0de4-4a97-b3b3-df2ef5e4e99d 00:31:27.614 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.874 [2024-10-21 12:16:04.343732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.874 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1202451 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1202451 /var/tmp/bdevperf.sock 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1202451 ']' 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:28.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:28.136 12:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:28.136 [2024-10-21 12:16:04.583017] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:31:28.136 [2024-10-21 12:16:04.583092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202451 ] 00:31:28.136 [2024-10-21 12:16:04.666481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.136 [2024-10-21 12:16:04.718611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.079 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:29.079 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:29.079 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:29.340 Nvme0n1 00:31:29.340 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:29.601 [ 00:31:29.601 { 00:31:29.601 "name": "Nvme0n1", 00:31:29.601 "aliases": [ 00:31:29.601 "a0299a45-0de4-4a97-b3b3-df2ef5e4e99d" 00:31:29.601 ], 00:31:29.601 "product_name": "NVMe disk", 00:31:29.601 "block_size": 4096, 00:31:29.601 "num_blocks": 38912, 00:31:29.601 "uuid": "a0299a45-0de4-4a97-b3b3-df2ef5e4e99d", 00:31:29.601 "numa_id": 0, 00:31:29.601 "assigned_rate_limits": { 00:31:29.601 "rw_ios_per_sec": 0, 00:31:29.601 "rw_mbytes_per_sec": 0, 00:31:29.601 "r_mbytes_per_sec": 0, 00:31:29.601 "w_mbytes_per_sec": 0 00:31:29.601 }, 00:31:29.601 "claimed": false, 00:31:29.601 "zoned": false, 00:31:29.601 "supported_io_types": { 00:31:29.601 "read": true, 00:31:29.601 "write": true, 00:31:29.601 "unmap": true, 00:31:29.601 "flush": true, 00:31:29.601 "reset": true, 00:31:29.601 "nvme_admin": true, 00:31:29.601 "nvme_io": true, 00:31:29.601 "nvme_io_md": false, 00:31:29.601 "write_zeroes": true, 00:31:29.601 "zcopy": false, 00:31:29.601 "get_zone_info": false, 00:31:29.601 "zone_management": false, 00:31:29.601 "zone_append": false, 00:31:29.601 "compare": true, 00:31:29.601 "compare_and_write": true, 00:31:29.601 "abort": true, 00:31:29.601 "seek_hole": false, 00:31:29.601 "seek_data": false, 00:31:29.601 "copy": true, 
00:31:29.601 "nvme_iov_md": false 00:31:29.601 }, 00:31:29.601 "memory_domains": [ 00:31:29.601 { 00:31:29.601 "dma_device_id": "system", 00:31:29.601 "dma_device_type": 1 00:31:29.601 } 00:31:29.601 ], 00:31:29.601 "driver_specific": { 00:31:29.601 "nvme": [ 00:31:29.601 { 00:31:29.601 "trid": { 00:31:29.602 "trtype": "TCP", 00:31:29.602 "adrfam": "IPv4", 00:31:29.602 "traddr": "10.0.0.2", 00:31:29.602 "trsvcid": "4420", 00:31:29.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:29.602 }, 00:31:29.602 "ctrlr_data": { 00:31:29.602 "cntlid": 1, 00:31:29.602 "vendor_id": "0x8086", 00:31:29.602 "model_number": "SPDK bdev Controller", 00:31:29.602 "serial_number": "SPDK0", 00:31:29.602 "firmware_revision": "25.01", 00:31:29.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:29.602 "oacs": { 00:31:29.602 "security": 0, 00:31:29.602 "format": 0, 00:31:29.602 "firmware": 0, 00:31:29.602 "ns_manage": 0 00:31:29.602 }, 00:31:29.602 "multi_ctrlr": true, 00:31:29.602 "ana_reporting": false 00:31:29.602 }, 00:31:29.602 "vs": { 00:31:29.602 "nvme_version": "1.3" 00:31:29.602 }, 00:31:29.602 "ns_data": { 00:31:29.602 "id": 1, 00:31:29.602 "can_share": true 00:31:29.602 } 00:31:29.602 } 00:31:29.602 ], 00:31:29.602 "mp_policy": "active_passive" 00:31:29.602 } 00:31:29.602 } 00:31:29.602 ] 00:31:29.602 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1202783 00:31:29.602 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:29.602 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:29.602 Running I/O for 10 seconds... 
00:31:30.547 Latency(us) 00:31:30.547 [2024-10-21T10:16:07.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.547 Nvme0n1 : 1.00 16759.00 65.46 0.00 0.00 0.00 0.00 0.00 00:31:30.547 [2024-10-21T10:16:07.142Z] =================================================================================================================== 00:31:30.547 [2024-10-21T10:16:07.142Z] Total : 16759.00 65.46 0.00 0.00 0.00 0.00 0.00 00:31:30.547 00:31:31.490 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:31.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:31.490 Nvme0n1 : 2.00 17046.00 66.59 0.00 0.00 0.00 0.00 0.00 00:31:31.490 [2024-10-21T10:16:08.085Z] =================================================================================================================== 00:31:31.490 [2024-10-21T10:16:08.085Z] Total : 17046.00 66.59 0.00 0.00 0.00 0.00 0.00 00:31:31.490 00:31:31.751 true 00:31:31.751 12:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:31.751 12:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:32.012 12:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:32.012 12:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:32.012 12:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1202783 00:31:32.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.583 Nvme0n1 : 3.00 17273.00 67.47 0.00 0.00 0.00 0.00 0.00 00:31:32.583 [2024-10-21T10:16:09.178Z] =================================================================================================================== 00:31:32.583 [2024-10-21T10:16:09.178Z] Total : 17273.00 67.47 0.00 0.00 0.00 0.00 0.00 00:31:32.583 00:31:33.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.525 Nvme0n1 : 4.00 17466.75 68.23 0.00 0.00 0.00 0.00 0.00 00:31:33.525 [2024-10-21T10:16:10.120Z] =================================================================================================================== 00:31:33.525 [2024-10-21T10:16:10.120Z] Total : 17466.75 68.23 0.00 0.00 0.00 0.00 0.00 00:31:33.525 00:31:34.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.911 Nvme0n1 : 5.00 17647.20 68.93 0.00 0.00 0.00 0.00 0.00 00:31:34.911 [2024-10-21T10:16:11.506Z] =================================================================================================================== 00:31:34.911 [2024-10-21T10:16:11.506Z] Total : 17647.20 68.93 0.00 0.00 0.00 0.00 0.00 00:31:34.911 00:31:35.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.483 Nvme0n1 : 6.00 18964.67 74.08 0.00 0.00 0.00 0.00 0.00 00:31:35.483 [2024-10-21T10:16:12.078Z] 
=================================================================================================================== 00:31:35.483 [2024-10-21T10:16:12.078Z] Total : 18964.67 74.08 0.00 0.00 0.00 0.00 0.00 00:31:35.483 00:31:36.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.868 Nvme0n1 : 7.00 19910.00 77.77 0.00 0.00 0.00 0.00 0.00 00:31:36.868 [2024-10-21T10:16:13.463Z] =================================================================================================================== 00:31:36.868 [2024-10-21T10:16:13.463Z] Total : 19910.00 77.77 0.00 0.00 0.00 0.00 0.00 00:31:36.868 00:31:37.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.812 Nvme0n1 : 8.00 20621.25 80.55 0.00 0.00 0.00 0.00 0.00 00:31:37.812 [2024-10-21T10:16:14.407Z] =================================================================================================================== 00:31:37.812 [2024-10-21T10:16:14.407Z] Total : 20621.25 80.55 0.00 0.00 0.00 0.00 0.00 00:31:37.812 00:31:38.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.755 Nvme0n1 : 9.00 21174.44 82.71 0.00 0.00 0.00 0.00 0.00 00:31:38.755 [2024-10-21T10:16:15.350Z] =================================================================================================================== 00:31:38.755 [2024-10-21T10:16:15.350Z] Total : 21174.44 82.71 0.00 0.00 0.00 0.00 0.00 00:31:38.755 00:31:39.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.699 Nvme0n1 : 10.00 21617.00 84.44 0.00 0.00 0.00 0.00 0.00 00:31:39.699 [2024-10-21T10:16:16.294Z] =================================================================================================================== 00:31:39.699 [2024-10-21T10:16:16.294Z] Total : 21617.00 84.44 0.00 0.00 0.00 0.00 0.00 00:31:39.699 00:31:39.699 00:31:39.699 Latency(us) 00:31:39.699 [2024-10-21T10:16:16.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.699 Nvme0n1 : 10.00 21620.56 84.46 0.00 0.00 5916.68 3713.71 32331.09 00:31:39.699 [2024-10-21T10:16:16.294Z] =================================================================================================================== 00:31:39.699 [2024-10-21T10:16:16.294Z] Total : 21620.56 84.46 0.00 0.00 5916.68 3713.71 32331.09 00:31:39.699 { 00:31:39.699 "results": [ 00:31:39.699 { 00:31:39.699 "job": "Nvme0n1", 00:31:39.699 "core_mask": "0x2", 00:31:39.699 "workload": "randwrite", 00:31:39.699 "status": "finished", 00:31:39.699 "queue_depth": 128, 00:31:39.699 "io_size": 4096, 00:31:39.699 "runtime": 10.004276, 00:31:39.699 "iops": 21620.555050660336, 00:31:39.699 "mibps": 84.45529316664194, 00:31:39.699 "io_failed": 0, 00:31:39.699 "io_timeout": 0, 00:31:39.699 "avg_latency_us": 5916.6795557980195, 00:31:39.699 "min_latency_us": 3713.7066666666665, 00:31:39.699 "max_latency_us": 32331.093333333334 00:31:39.699 } 00:31:39.699 ], 00:31:39.699 "core_count": 1 00:31:39.699 } 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1202451 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1202451 ']' 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1202451 
00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1202451 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1202451' 00:31:39.699 killing process with pid 1202451 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1202451 00:31:39.699 Received shutdown signal, test time was about 10.000000 seconds 00:31:39.699 00:31:39.699 Latency(us) 00:31:39.699 [2024-10-21T10:16:16.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.699 [2024-10-21T10:16:16.294Z] =================================================================================================================== 00:31:39.699 [2024-10-21T10:16:16.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1202451 00:31:39.699 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:39.959 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.219 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:40.219 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:40.480 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:40.480 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:40.480 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:40.480 [2024-10-21 12:16:17.003084] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 
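The cluster counts the clean test asserts all follow from the sizes involved. Worked out from the numbers the log itself reports (the one-cluster gap between file size and data clusters is lvstore metadata, as implied by 49 data clusters on a 200 MiB file):

  # 4 MiB clusters throughout
  # 200 MiB file -> 50 clusters, minus metadata -> 49 data clusters (initial check)
  # 400 MiB file -> 100 clusters, minus metadata -> 99 data clusters (after grow_lvstore)
  # 150 MiB lvol -> rounds up to 38 clusters = 38 * 1024 blocks = 38912 blocks
  # free clusters after the grow: 99 - 38 = 61, the value asserted at line 70 of the script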
00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:40.480 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:40.741 request: 00:31:40.741 { 00:31:40.741 "uuid": "b223e273-021e-41df-8dc4-c39ca06da2ed", 00:31:40.741 "method": "bdev_lvol_get_lvstores", 00:31:40.741 "req_id": 1 00:31:40.741 } 00:31:40.741 Got JSON-RPC error response 00:31:40.741 response: 00:31:40.741 { 00:31:40.741 "code": -19, 00:31:40.741 "message": "No such device" 00:31:40.741 } 00:31:40.741 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:40.741 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:40.741 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:40.741 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:40.741 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:41.002 aio_bdev 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a0299a45-0de4-4a97-b3b3-df2ef5e4e99d 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a0299a45-0de4-4a97-b3b3-df2ef5e4e99d 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:41.002 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0299a45-0de4-4a97-b3b3-df2ef5e4e99d -t 2000 00:31:41.263 [ 00:31:41.263 { 00:31:41.263 "name": "a0299a45-0de4-4a97-b3b3-df2ef5e4e99d", 00:31:41.263 "aliases": [ 00:31:41.263 "lvs/lvol" 00:31:41.263 ], 00:31:41.263 "product_name": "Logical Volume", 00:31:41.263 "block_size": 4096, 00:31:41.263 "num_blocks": 38912, 00:31:41.263 "uuid": "a0299a45-0de4-4a97-b3b3-df2ef5e4e99d", 00:31:41.263 "assigned_rate_limits": { 00:31:41.263 "rw_ios_per_sec": 0, 00:31:41.263 "rw_mbytes_per_sec": 0, 00:31:41.263 "r_mbytes_per_sec": 0, 00:31:41.263 "w_mbytes_per_sec": 0 00:31:41.263 }, 00:31:41.263 "claimed": false, 00:31:41.263 "zoned": false, 00:31:41.263 "supported_io_types": { 00:31:41.263 "read": true, 00:31:41.263 "write": true, 00:31:41.263 "unmap": true, 00:31:41.263 "flush": false, 00:31:41.263 "reset": true, 00:31:41.263 "nvme_admin": false, 00:31:41.263 "nvme_io": false, 00:31:41.263 "nvme_io_md": false, 00:31:41.263 "write_zeroes": true, 00:31:41.263 "zcopy": false, 00:31:41.263 "get_zone_info": false, 00:31:41.263 "zone_management": false, 00:31:41.263 "zone_append": false, 00:31:41.263 "compare": false, 00:31:41.263 "compare_and_write": false, 00:31:41.263 "abort": false, 00:31:41.263 "seek_hole": true, 00:31:41.263 "seek_data": true, 00:31:41.263 "copy": false, 00:31:41.263 "nvme_iov_md": false 00:31:41.263 }, 00:31:41.263 "driver_specific": { 00:31:41.263 "lvol": { 00:31:41.263 "lvol_store_uuid": "b223e273-021e-41df-8dc4-c39ca06da2ed", 00:31:41.263 "base_bdev": "aio_bdev", 00:31:41.263 "thin_provision": false, 00:31:41.263 "num_allocated_clusters": 38, 00:31:41.263 "snapshot": false, 00:31:41.263 "clone": false, 00:31:41.263 "esnap_clone": false 00:31:41.263 } 00:31:41.263 } 00:31:41.263 } 00:31:41.263 ] 00:31:41.263 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:41.263 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:41.263 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:41.524 12:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:41.524 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:41.524 12:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:41.524 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:41.524 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0299a45-0de4-4a97-b3b3-df2ef5e4e99d 00:31:41.785 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b223e273-021e-41df-8dc4-c39ca06da2ed 00:31:42.077 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:42.077 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.077 00:31:42.077 real 0m16.020s 00:31:42.077 user 0m15.678s 00:31:42.077 sys 0m1.465s 00:31:42.077 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:42.077 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:42.077 ************************************ 00:31:42.077 END TEST lvs_grow_clean 00:31:42.077 ************************************ 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:42.404 ************************************ 00:31:42.404 START TEST lvs_grow_dirty 00:31:42.404 ************************************ 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:42.404 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:42.665 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f2100bd1-1153-413b-b173-5c527aea8b20 00:31:42.665 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:42.665 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:42.925 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:42.925 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:42.926 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f2100bd1-1153-413b-b173-5c527aea8b20 lvol 150 00:31:42.926 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:42.926 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.926 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:43.187 [2024-10-21 12:16:19.623010] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:43.187 [2024-10-21 12:16:19.623156] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:43.187 true 00:31:43.187 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:43.187 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:43.447 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:43.447 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:43.447 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:43.708 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:43.708 [2024-10-21 12:16:20.295551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1205539 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1205539 /var/tmp/bdevperf.sock 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1205539 ']' 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:43.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
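The dirty variant reuses the same bdevperf harness as the clean test. The traced invocation is dense; annotated below (the flag glosses are a reader's aid based on bdevperf's usual options, not taken from this log):

  # bdevperf invocation as traced above
  bdevperf -r /var/tmp/bdevperf.sock \   # RPC socket the test drives it through
           -m 0x2 \                      # core mask: run the reactor on core 1
           -o 4096 -q 128 \              # 4 KiB I/Os at queue depth 128
           -w randwrite -t 10 \          # random writes for 10 seconds
           -S 1 \                        # print interim results every second
           -z                            # start idle and wait for an RPC
  # the run is then kicked off over that same socket:
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests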
00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:43.970 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:43.970 [2024-10-21 12:16:20.558273] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:31:43.970 [2024-10-21 12:16:20.558337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205539 ] 00:31:44.230 [2024-10-21 12:16:20.636182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.230 [2024-10-21 12:16:20.666045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.803 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.803 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:44.803 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:45.063 Nvme0n1 00:31:45.063 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:45.326 [ 00:31:45.326 { 00:31:45.326 "name": "Nvme0n1", 00:31:45.326 "aliases": [ 00:31:45.326 "bbd01eea-d7b1-491b-a236-33a1ea38d630" 00:31:45.326 ], 00:31:45.326 "product_name": "NVMe disk", 00:31:45.326 "block_size": 4096, 00:31:45.326 "num_blocks": 38912, 00:31:45.326 "uuid": "bbd01eea-d7b1-491b-a236-33a1ea38d630", 00:31:45.326 "numa_id": 0, 00:31:45.326 "assigned_rate_limits": { 00:31:45.326 "rw_ios_per_sec": 0, 00:31:45.326 "rw_mbytes_per_sec": 0, 00:31:45.326 "r_mbytes_per_sec": 0, 00:31:45.326 "w_mbytes_per_sec": 0 00:31:45.326 }, 00:31:45.326 "claimed": false, 00:31:45.326 "zoned": false, 00:31:45.326 "supported_io_types": { 00:31:45.326 "read": true, 00:31:45.326 "write": true, 00:31:45.326 "unmap": true, 00:31:45.326 "flush": true, 00:31:45.326 "reset": true, 00:31:45.326 "nvme_admin": true, 00:31:45.326 "nvme_io": true, 00:31:45.326 "nvme_io_md": false, 00:31:45.326 "write_zeroes": true, 00:31:45.326 "zcopy": false, 00:31:45.326 "get_zone_info": false, 00:31:45.326 "zone_management": false, 00:31:45.326 "zone_append": false, 00:31:45.326 "compare": true, 00:31:45.326 "compare_and_write": true, 00:31:45.326 "abort": true, 00:31:45.326 "seek_hole": false, 00:31:45.326 "seek_data": false, 00:31:45.326 "copy": true, 00:31:45.326 "nvme_iov_md": false 00:31:45.326 }, 00:31:45.326 "memory_domains": [ 00:31:45.326 { 00:31:45.326 "dma_device_id": "system", 00:31:45.326 "dma_device_type": 1 00:31:45.326 } 00:31:45.326 ], 00:31:45.326 "driver_specific": { 00:31:45.326 "nvme": [ 00:31:45.326 { 00:31:45.326 "trid": { 00:31:45.326 "trtype": "TCP", 00:31:45.326 "adrfam": "IPv4", 00:31:45.326 "traddr": "10.0.0.2", 00:31:45.326 "trsvcid": "4420", 00:31:45.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:45.326 }, 00:31:45.326 "ctrlr_data": 
{ 00:31:45.326 "cntlid": 1, 00:31:45.326 "vendor_id": "0x8086", 00:31:45.326 "model_number": "SPDK bdev Controller", 00:31:45.326 "serial_number": "SPDK0", 00:31:45.326 "firmware_revision": "25.01", 00:31:45.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.326 "oacs": { 00:31:45.326 "security": 0, 00:31:45.326 "format": 0, 00:31:45.326 "firmware": 0, 00:31:45.326 "ns_manage": 0 00:31:45.326 }, 00:31:45.326 "multi_ctrlr": true, 00:31:45.326 "ana_reporting": false 00:31:45.326 }, 00:31:45.326 "vs": { 00:31:45.326 "nvme_version": "1.3" 00:31:45.326 }, 00:31:45.326 "ns_data": { 00:31:45.326 "id": 1, 00:31:45.326 "can_share": true 00:31:45.326 } 00:31:45.326 } 00:31:45.326 ], 00:31:45.326 "mp_policy": "active_passive" 00:31:45.326 } 00:31:45.326 } 00:31:45.326 ] 00:31:45.326 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1205876 00:31:45.326 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:45.326 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:45.326 Running I/O for 10 seconds... 00:31:46.268 Latency(us) 00:31:46.268 [2024-10-21T10:16:22.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.268 Nvme0n1 : 1.00 17522.00 68.45 0.00 0.00 0.00 0.00 0.00 00:31:46.268 [2024-10-21T10:16:22.863Z] =================================================================================================================== 00:31:46.268 [2024-10-21T10:16:22.863Z] Total : 17522.00 68.45 0.00 0.00 0.00 0.00 0.00 00:31:46.268 00:31:47.212 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:47.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.473 Nvme0n1 : 2.00 17762.00 69.38 0.00 0.00 0.00 0.00 0.00 00:31:47.473 [2024-10-21T10:16:24.068Z] =================================================================================================================== 00:31:47.473 [2024-10-21T10:16:24.068Z] Total : 17762.00 69.38 0.00 0.00 0.00 0.00 0.00 00:31:47.473 00:31:47.473 true 00:31:47.473 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:47.473 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:47.734 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:47.734 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:47.734 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1205876 00:31:48.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.304 Nvme0n1 : 
3.00 17851.67 69.73 0.00 0.00 0.00 0.00 0.00 00:31:48.304 [2024-10-21T10:16:24.899Z] =================================================================================================================== 00:31:48.304 [2024-10-21T10:16:24.899Z] Total : 17851.67 69.73 0.00 0.00 0.00 0.00 0.00 00:31:48.304 00:31:49.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.245 Nvme0n1 : 4.00 17916.50 69.99 0.00 0.00 0.00 0.00 0.00 00:31:49.245 [2024-10-21T10:16:25.840Z] =================================================================================================================== 00:31:49.245 [2024-10-21T10:16:25.840Z] Total : 17916.50 69.99 0.00 0.00 0.00 0.00 0.00 00:31:49.245 00:31:50.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.630 Nvme0n1 : 5.00 18288.60 71.44 0.00 0.00 0.00 0.00 0.00 00:31:50.630 [2024-10-21T10:16:27.225Z] =================================================================================================================== 00:31:50.630 [2024-10-21T10:16:27.225Z] Total : 18288.60 71.44 0.00 0.00 0.00 0.00 0.00 00:31:50.630 00:31:51.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.572 Nvme0n1 : 6.00 19496.33 76.16 0.00 0.00 0.00 0.00 0.00 00:31:51.572 [2024-10-21T10:16:28.167Z] =================================================================================================================== 00:31:51.572 [2024-10-21T10:16:28.167Z] Total : 19496.33 76.16 0.00 0.00 0.00 0.00 0.00 00:31:51.572 00:31:52.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.515 Nvme0n1 : 7.00 20367.43 79.56 0.00 0.00 0.00 0.00 0.00 00:31:52.515 [2024-10-21T10:16:29.110Z] =================================================================================================================== 00:31:52.515 [2024-10-21T10:16:29.110Z] Total : 20367.43 79.56 0.00 0.00 0.00 0.00 0.00 00:31:52.515 00:31:53.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.456 Nvme0n1 : 8.00 21021.50 82.12 0.00 0.00 0.00 0.00 0.00 00:31:53.456 [2024-10-21T10:16:30.051Z] =================================================================================================================== 00:31:53.456 [2024-10-21T10:16:30.051Z] Total : 21021.50 82.12 0.00 0.00 0.00 0.00 0.00 00:31:53.456 00:31:54.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.398 Nvme0n1 : 9.00 21523.22 84.08 0.00 0.00 0.00 0.00 0.00 00:31:54.398 [2024-10-21T10:16:30.993Z] =================================================================================================================== 00:31:54.398 [2024-10-21T10:16:30.993Z] Total : 21523.22 84.08 0.00 0.00 0.00 0.00 0.00 00:31:54.398 00:31:55.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.342 Nvme0n1 : 10.00 21930.90 85.67 0.00 0.00 0.00 0.00 0.00 00:31:55.342 [2024-10-21T10:16:31.937Z] =================================================================================================================== 00:31:55.342 [2024-10-21T10:16:31.937Z] Total : 21930.90 85.67 0.00 0.00 0.00 0.00 0.00 00:31:55.342 00:31:55.342 00:31:55.342 Latency(us) 00:31:55.342 [2024-10-21T10:16:31.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.342 Nvme0n1 : 10.01 21931.97 85.67 0.00 0.00 5833.11 3126.61 30365.01 00:31:55.342 
[2024-10-21T10:16:31.937Z] =================================================================================================================== 00:31:55.342 [2024-10-21T10:16:31.937Z] Total : 21931.97 85.67 0.00 0.00 5833.11 3126.61 30365.01 00:31:55.342 { 00:31:55.342 "results": [ 00:31:55.342 { 00:31:55.342 "job": "Nvme0n1", 00:31:55.342 "core_mask": "0x2", 00:31:55.342 "workload": "randwrite", 00:31:55.342 "status": "finished", 00:31:55.342 "queue_depth": 128, 00:31:55.342 "io_size": 4096, 00:31:55.342 "runtime": 10.00535, 00:31:55.342 "iops": 21931.96639797708, 00:31:55.342 "mibps": 85.67174374209797, 00:31:55.342 "io_failed": 0, 00:31:55.342 "io_timeout": 0, 00:31:55.342 "avg_latency_us": 5833.113941586879, 00:31:55.342 "min_latency_us": 3126.6133333333332, 00:31:55.342 "max_latency_us": 30365.013333333332 00:31:55.342 } 00:31:55.342 ], 00:31:55.342 "core_count": 1 00:31:55.342 } 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1205539 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1205539 ']' 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1205539 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.342 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1205539 00:31:55.603 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:55.603 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:55.603 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1205539' 00:31:55.603 killing process with pid 1205539 00:31:55.603 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1205539 00:31:55.603 Received shutdown signal, test time was about 10.000000 seconds 00:31:55.603 00:31:55.603 Latency(us) 00:31:55.603 [2024-10-21T10:16:32.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.603 [2024-10-21T10:16:32.198Z] =================================================================================================================== 00:31:55.603 [2024-10-21T10:16:32.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:55.603 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1205539 00:31:55.603 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:55.864 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:55.864 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:55.864 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1201965 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1201965 00:31:56.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1201965 Killed "${NVMF_APP[@]}" "$@" 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1207888 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1207888 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1207888 ']' 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.125 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:56.125 [2024-10-21 12:16:32.659979] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.125 [2024-10-21 12:16:32.661015] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:31:56.125 [2024-10-21 12:16:32.661065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.386 [2024-10-21 12:16:32.745715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.386 [2024-10-21 12:16:32.777078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.386 [2024-10-21 12:16:32.777105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.386 [2024-10-21 12:16:32.777111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.386 [2024-10-21 12:16:32.777116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.386 [2024-10-21 12:16:32.777120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.386 [2024-10-21 12:16:32.777577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.386 [2024-10-21 12:16:32.828234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.386 [2024-10-21 12:16:32.828446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
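This restart is where the dirty test earns its name. The script hard-kills the original nvmf target with SIGKILL (the kill -9 1201965 traced above), so the grown lvstore is never shut down cleanly, then brings up a fresh nvmf_tgt in interrupt mode and re-attaches the backing file. Condensed, with the same $RPC shorthand and shortened paths as before:

  # dirty-path teardown and reload, condensed from the trace
  kill -9 "$nvmfpid"                                   # no clean lvstore shutdown
  nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &    # fresh target process
  waitforlisten "$nvmfpid"
  $RPC bdev_aio_create aio_file aio_bdev 4096          # re-create the AIO bdev

Because the blobstore on aio_bdev was never flushed, loading it triggers recovery: the "Performing recovery on blobstore" and "Recover: blob 0x0 / 0x1" notices in the next lines are the metadata replay that reconstructs the lvstore and its lvol.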
00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.957 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:57.218 [2024-10-21 12:16:33.647744] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:57.218 [2024-10-21 12:16:33.648046] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:57.218 [2024-10-21 12:16:33.648135] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:57.218 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:57.218 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:57.218 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:57.219 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:57.219 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:57.219 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:57.219 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:57.219 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:57.481 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bbd01eea-d7b1-491b-a236-33a1ea38d630 -t 2000 00:31:57.481 [ 00:31:57.481 { 00:31:57.481 "name": "bbd01eea-d7b1-491b-a236-33a1ea38d630", 00:31:57.481 "aliases": [ 00:31:57.481 "lvs/lvol" 00:31:57.481 ], 00:31:57.481 "product_name": "Logical Volume", 00:31:57.481 "block_size": 4096, 00:31:57.481 "num_blocks": 38912, 00:31:57.481 "uuid": "bbd01eea-d7b1-491b-a236-33a1ea38d630", 00:31:57.481 "assigned_rate_limits": { 00:31:57.481 "rw_ios_per_sec": 0, 00:31:57.481 "rw_mbytes_per_sec": 0, 00:31:57.481 
"r_mbytes_per_sec": 0, 00:31:57.481 "w_mbytes_per_sec": 0 00:31:57.481 }, 00:31:57.481 "claimed": false, 00:31:57.481 "zoned": false, 00:31:57.481 "supported_io_types": { 00:31:57.481 "read": true, 00:31:57.481 "write": true, 00:31:57.481 "unmap": true, 00:31:57.481 "flush": false, 00:31:57.481 "reset": true, 00:31:57.481 "nvme_admin": false, 00:31:57.481 "nvme_io": false, 00:31:57.481 "nvme_io_md": false, 00:31:57.481 "write_zeroes": true, 00:31:57.481 "zcopy": false, 00:31:57.481 "get_zone_info": false, 00:31:57.481 "zone_management": false, 00:31:57.481 "zone_append": false, 00:31:57.481 "compare": false, 00:31:57.481 "compare_and_write": false, 00:31:57.481 "abort": false, 00:31:57.481 "seek_hole": true, 00:31:57.481 "seek_data": true, 00:31:57.481 "copy": false, 00:31:57.481 "nvme_iov_md": false 00:31:57.481 }, 00:31:57.481 "driver_specific": { 00:31:57.481 "lvol": { 00:31:57.481 "lvol_store_uuid": "f2100bd1-1153-413b-b173-5c527aea8b20", 00:31:57.481 "base_bdev": "aio_bdev", 00:31:57.481 "thin_provision": false, 00:31:57.481 "num_allocated_clusters": 38, 00:31:57.481 "snapshot": false, 00:31:57.481 "clone": false, 00:31:57.481 "esnap_clone": false 00:31:57.481 } 00:31:57.481 } 00:31:57.481 } 00:31:57.481 ] 00:31:57.481 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:57.481 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:57.481 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:57.743 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:57.743 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:57.743 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:58.003 [2024-10-21 12:16:34.506051] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.003 12:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:58.003 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.264 request: 00:31:58.264 { 00:31:58.264 "uuid": "f2100bd1-1153-413b-b173-5c527aea8b20", 00:31:58.264 "method": "bdev_lvol_get_lvstores", 00:31:58.264 "req_id": 1 00:31:58.264 } 00:31:58.264 Got JSON-RPC error response 00:31:58.264 response: 00:31:58.264 { 00:31:58.264 "code": -19, 00:31:58.264 "message": "No such device" 00:31:58.264 } 00:31:58.264 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:58.264 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:58.264 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:58.264 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:58.264 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:58.525 aio_bdev 00:31:58.525 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:58.526 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:58.526 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:58.526 12:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:58.526 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:58.526 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:58.526 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:58.526 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bbd01eea-d7b1-491b-a236-33a1ea38d630 -t 2000 00:31:58.786 [ 00:31:58.786 { 00:31:58.786 "name": "bbd01eea-d7b1-491b-a236-33a1ea38d630", 00:31:58.786 "aliases": [ 00:31:58.786 "lvs/lvol" 00:31:58.786 ], 00:31:58.786 "product_name": "Logical Volume", 00:31:58.786 "block_size": 4096, 00:31:58.786 "num_blocks": 38912, 00:31:58.786 "uuid": "bbd01eea-d7b1-491b-a236-33a1ea38d630", 00:31:58.786 "assigned_rate_limits": { 00:31:58.786 "rw_ios_per_sec": 0, 00:31:58.786 "rw_mbytes_per_sec": 0, 00:31:58.786 "r_mbytes_per_sec": 0, 00:31:58.786 "w_mbytes_per_sec": 0 00:31:58.786 }, 00:31:58.786 "claimed": false, 00:31:58.786 "zoned": false, 00:31:58.786 "supported_io_types": { 00:31:58.786 "read": true, 00:31:58.786 "write": true, 00:31:58.786 "unmap": true, 00:31:58.786 "flush": false, 00:31:58.786 "reset": true, 00:31:58.786 "nvme_admin": false, 00:31:58.786 "nvme_io": false, 00:31:58.786 "nvme_io_md": false, 00:31:58.786 "write_zeroes": true, 00:31:58.786 "zcopy": false, 00:31:58.786 "get_zone_info": false, 00:31:58.786 "zone_management": false, 00:31:58.786 "zone_append": false, 00:31:58.786 "compare": false, 00:31:58.786 "compare_and_write": false, 00:31:58.786 "abort": false, 00:31:58.786 "seek_hole": true, 00:31:58.786 "seek_data": true, 00:31:58.786 "copy": false, 00:31:58.786 "nvme_iov_md": false 00:31:58.786 }, 00:31:58.786 "driver_specific": { 00:31:58.786 "lvol": { 00:31:58.786 "lvol_store_uuid": "f2100bd1-1153-413b-b173-5c527aea8b20", 00:31:58.786 "base_bdev": "aio_bdev", 00:31:58.786 "thin_provision": false, 00:31:58.786 "num_allocated_clusters": 38, 00:31:58.786 "snapshot": false, 00:31:58.786 "clone": false, 00:31:58.786 "esnap_clone": false 00:31:58.786 } 00:31:58.786 } 00:31:58.786 } 00:31:58.786 ] 00:31:58.786 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:58.787 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.787 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:58.787 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:58.787 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:58.787 12:16:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:59.048 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:59.048 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bbd01eea-d7b1-491b-a236-33a1ea38d630 00:31:59.309 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2100bd1-1153-413b-b173-5c527aea8b20 00:31:59.570 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:59.570 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:59.570 00:31:59.570 real 0m17.401s 00:31:59.570 user 0m35.230s 00:31:59.570 sys 0m3.151s 00:31:59.570 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:59.570 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.570 ************************************ 00:31:59.570 END TEST lvs_grow_dirty 00:31:59.570 ************************************ 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:59.832 nvmf_trace.0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
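The lvs_grow_dirty teardown traced above is plain JSON-RPC, so the same checks can be replayed by hand. A minimal sketch, assuming a running SPDK target, the repo's scripts/rpc.py on PATH, and this run's lvstore UUID; expected cluster counts are the values asserted in the trace:

  # Query the lvstore and compare cluster counts (expected values from this run)
  rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 | jq -r '.[0].free_clusters'        # expect 61
  rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 | jq -r '.[0].total_data_clusters'  # expect 99
  # Deleting the AIO base bdev hot-removes the lvstore; the same query must then fail with -19 (No such device)
  rpc.py bdev_aio_delete aio_bdev
  rpc.py bdev_lvol_get_lvstores -u f2100bd1-1153-413b-b173-5c527aea8b20 || echo 'lvstore gone, as expected'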
00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.832 rmmod nvme_tcp 00:31:59.832 rmmod nvme_fabrics 00:31:59.832 rmmod nvme_keyring 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1207888 ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1207888 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1207888 ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1207888 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1207888 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1207888' 00:31:59.832 killing process with pid 1207888 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1207888 00:31:59.832 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1207888 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.093 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.009 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.009 00:32:02.009 real 0m44.838s 00:32:02.009 user 0m53.939s 00:32:02.009 sys 0m10.731s 00:32:02.009 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:02.009 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:02.009 ************************************ 00:32:02.009 END TEST nvmf_lvs_grow 00:32:02.009 ************************************ 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:02.272 ************************************ 00:32:02.272 START TEST nvmf_bdev_io_wait 00:32:02.272 ************************************ 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:02.272 * Looking for test storage... 
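The run_test wrapper above only adds timing and the banner bookkeeping; the underlying invocation is just the suite script plus transport flags. A sketch of running it standalone, assuming an SPDK checkout built with the same options and root privileges:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode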
00:32:02.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:02.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.272 --rc genhtml_branch_coverage=1 00:32:02.272 --rc genhtml_function_coverage=1 00:32:02.272 --rc genhtml_legend=1 00:32:02.272 --rc geninfo_all_blocks=1 00:32:02.272 --rc geninfo_unexecuted_blocks=1 00:32:02.272 00:32:02.272 ' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:02.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.272 --rc genhtml_branch_coverage=1 00:32:02.272 --rc genhtml_function_coverage=1 00:32:02.272 --rc genhtml_legend=1 00:32:02.272 --rc geninfo_all_blocks=1 00:32:02.272 --rc geninfo_unexecuted_blocks=1 00:32:02.272 00:32:02.272 ' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:02.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.272 --rc genhtml_branch_coverage=1 00:32:02.272 --rc genhtml_function_coverage=1 00:32:02.272 --rc genhtml_legend=1 00:32:02.272 --rc geninfo_all_blocks=1 00:32:02.272 --rc geninfo_unexecuted_blocks=1 00:32:02.272 00:32:02.272 ' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:02.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.272 --rc genhtml_branch_coverage=1 00:32:02.272 --rc genhtml_function_coverage=1 00:32:02.272 --rc genhtml_legend=1 00:32:02.272 --rc geninfo_all_blocks=1 00:32:02.272 --rc 
geninfo_unexecuted_blocks=1 00:32:02.272 00:32:02.272 ' 00:32:02.272 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.534 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.680 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
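The arrays being filled above are vendor:device allowlists for supported NICs; this run matches 0x8086:0x159b (the Intel E810 family) on two ports. A rough out-of-harness equivalent of the same probe, assuming lspci is available:

  lspci -D -d 8086:159b   # list E810 ports with full PCI domain:bus:device.function addresses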
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:10.681 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:10.681 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:10.681 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:10.681 
12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:10.681 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:32:10.681 00:32:10.681 --- 10.0.0.2 ping statistics --- 00:32:10.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.681 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms
00:32:10.681
00:32:10.681 --- 10.0.0.1 ping statistics ---
00:32:10.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:10.681 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1212744
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1212744
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1212744 ']'
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:10.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
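Condensed, the namespace plumbing and target launch traced above amount to the following (interface names, IPs, and flags exactly as used in this run; assumes the SPDK build tree and root privileges):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc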
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:10.681 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:10.681 [2024-10-21 12:16:46.499941] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:10.681 [2024-10-21 12:16:46.501079] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:32:10.681 [2024-10-21 12:16:46.501131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:10.681 [2024-10-21 12:16:46.592391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:10.681 [2024-10-21 12:16:46.647291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:10.681 [2024-10-21 12:16:46.647355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:10.681 [2024-10-21 12:16:46.647364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:10.681 [2024-10-21 12:16:46.647371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:10.681 [2024-10-21 12:16:46.647377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:10.681 [2024-10-21 12:16:46.649603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:10.681 [2024-10-21 12:16:46.649764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:10.681 [2024-10-21 12:16:46.649926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.681 [2024-10-21 12:16:46.649927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:10.681 [2024-10-21 12:16:46.650277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
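Because the target was started with --wait-for-rpc, initialization pauses right after the reactors come up, which is why the notices above appear before any bdev or transport setup. One way to confirm the four reactors from another shell is the framework_get_reactors RPC (a standard SPDK call; the jq filter here is illustrative, not taken from this run):

  scripts/rpc.py framework_get_reactors | jq '.reactors[].lcore'   # expect the four cores from mask 0xF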
00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 [2024-10-21 12:16:47.438503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:10.943 [2024-10-21 12:16:47.438844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:10.943 [2024-10-21 12:16:47.438926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:10.943 [2024-10-21 12:16:47.439074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
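Ordering matters in the two rpc_cmd calls above: bdev_set_options only takes effect if it lands before framework_start_init, since the bdev I/O pool and cache sizes are consumed during subsystem initialization. By hand the pair would look like:

  scripts/rpc.py bdev_set_options -p 5 -c 1   # pool/cache sizes as passed in this run
  scripts/rpc.py framework_start_init         # releases the --wait-for-rpc hold and finishes startup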
00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 [2024-10-21 12:16:47.450486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 Malloc0 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.943 [2024-10-21 12:16:47.527171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1212976 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1212978 00:32:10.943 12:16:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:10.943 { 00:32:10.943 "params": { 00:32:10.943 "name": "Nvme$subsystem", 00:32:10.943 "trtype": "$TEST_TRANSPORT", 00:32:10.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.943 "adrfam": "ipv4", 00:32:10.943 "trsvcid": "$NVMF_PORT", 00:32:10.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.943 "hdgst": ${hdgst:-false}, 00:32:10.943 "ddgst": ${ddgst:-false} 00:32:10.943 }, 00:32:10.943 "method": "bdev_nvme_attach_controller" 00:32:10.943 } 00:32:10.943 EOF 00:32:10.943 )") 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1212980 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:10.943 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:11.205 { 00:32:11.205 "params": { 00:32:11.205 "name": "Nvme$subsystem", 00:32:11.205 "trtype": "$TEST_TRANSPORT", 00:32:11.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.205 "adrfam": "ipv4", 00:32:11.205 "trsvcid": "$NVMF_PORT", 00:32:11.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.205 "hdgst": ${hdgst:-false}, 00:32:11.205 "ddgst": ${ddgst:-false} 00:32:11.205 }, 00:32:11.205 "method": "bdev_nvme_attach_controller" 00:32:11.205 } 00:32:11.205 EOF 00:32:11.205 )") 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1212983 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 
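The --json /dev/fd/63 arguments above are bash process substitution: each bdevperf instance reads its attach configuration from a pipe fed by gen_nvmf_target_json. A sketch of how one of the four launches expands, with the workspace path as traced and the PID capture matching WRITE_PID:

  # write workload: one reactor on mask 0x10, qd 128, 4 KiB I/O, 1 s, 256 MB DPDK memory
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json) &
  WRITE_PID=$!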
00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:11.205 { 00:32:11.205 "params": { 00:32:11.205 "name": "Nvme$subsystem", 00:32:11.205 "trtype": "$TEST_TRANSPORT", 00:32:11.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.205 "adrfam": "ipv4", 00:32:11.205 "trsvcid": "$NVMF_PORT", 00:32:11.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.205 "hdgst": ${hdgst:-false}, 00:32:11.205 "ddgst": ${ddgst:-false} 00:32:11.205 }, 00:32:11.205 "method": "bdev_nvme_attach_controller" 00:32:11.205 } 00:32:11.205 EOF 00:32:11.205 )") 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:11.205 { 00:32:11.205 "params": { 00:32:11.205 "name": "Nvme$subsystem", 00:32:11.205 "trtype": "$TEST_TRANSPORT", 00:32:11.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.205 "adrfam": "ipv4", 00:32:11.205 "trsvcid": "$NVMF_PORT", 00:32:11.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.205 "hdgst": ${hdgst:-false}, 00:32:11.205 "ddgst": ${ddgst:-false} 00:32:11.205 }, 00:32:11.205 "method": "bdev_nvme_attach_controller" 00:32:11.205 } 00:32:11.205 EOF 00:32:11.205 )") 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1212976 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:11.205 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
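Each heredoc above is a parameter template that gen_nvmf_target_json fills from the live environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT), and the jq . calls normalize the result; the resolved parameters are printed just below. The complete document handed to bdevperf is not shown verbatim in this trace, but it is presumably SPDK's standard JSON config shape, with the params wrapped as a single bdev-subsystem entry, roughly:

  gen_nvmf_target_json | jq .
  # {
  #   "subsystems": [
  #     {
  #       "subsystem": "bdev",
  #       "config": [
  #         {
  #           "method": "bdev_nvme_attach_controller",
  #           "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", ... }
  #         }
  #       ]
  #     }
  #   ]
  # }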
00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:11.206 "params": { 00:32:11.206 "name": "Nvme1", 00:32:11.206 "trtype": "tcp", 00:32:11.206 "traddr": "10.0.0.2", 00:32:11.206 "adrfam": "ipv4", 00:32:11.206 "trsvcid": "4420", 00:32:11.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.206 "hdgst": false, 00:32:11.206 "ddgst": false 00:32:11.206 }, 00:32:11.206 "method": "bdev_nvme_attach_controller" 00:32:11.206 }' 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:11.206 "params": { 00:32:11.206 "name": "Nvme1", 00:32:11.206 "trtype": "tcp", 00:32:11.206 "traddr": "10.0.0.2", 00:32:11.206 "adrfam": "ipv4", 00:32:11.206 "trsvcid": "4420", 00:32:11.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.206 "hdgst": false, 00:32:11.206 "ddgst": false 00:32:11.206 }, 00:32:11.206 "method": "bdev_nvme_attach_controller" 00:32:11.206 }' 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:11.206 "params": { 00:32:11.206 "name": "Nvme1", 00:32:11.206 "trtype": "tcp", 00:32:11.206 "traddr": "10.0.0.2", 00:32:11.206 "adrfam": "ipv4", 00:32:11.206 "trsvcid": "4420", 00:32:11.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.206 "hdgst": false, 00:32:11.206 "ddgst": false 00:32:11.206 }, 00:32:11.206 "method": "bdev_nvme_attach_controller" 00:32:11.206 }' 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:11.206 12:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:11.206 "params": { 00:32:11.206 "name": "Nvme1", 00:32:11.206 "trtype": "tcp", 00:32:11.206 "traddr": "10.0.0.2", 00:32:11.206 "adrfam": "ipv4", 00:32:11.206 "trsvcid": "4420", 00:32:11.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.206 "hdgst": false, 00:32:11.206 "ddgst": false 00:32:11.206 }, 00:32:11.206 "method": "bdev_nvme_attach_controller" 00:32:11.206 }' 00:32:11.206 [2024-10-21 12:16:47.588001] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:32:11.206 [2024-10-21 12:16:47.588073] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:11.206 [2024-10-21 12:16:47.589053] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:32:11.206 [2024-10-21 12:16:47.589118] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:11.206 [2024-10-21 12:16:47.589491] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:32:11.206 [2024-10-21 12:16:47.589547] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:11.206 [2024-10-21 12:16:47.593672] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:32:11.206 [2024-10-21 12:16:47.593741] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:11.467 [2024-10-21 12:16:47.802937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.467 [2024-10-21 12:16:47.841532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:11.467 [2024-10-21 12:16:47.896363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.467 [2024-10-21 12:16:47.939380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:11.467 [2024-10-21 12:16:47.957255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.467 [2024-10-21 12:16:47.996304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:11.467 [2024-10-21 12:16:48.027319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.729 [2024-10-21 12:16:48.067372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:11.729 Running I/O for 1 seconds... 00:32:11.729 Running I/O for 1 seconds... 00:32:11.729 Running I/O for 1 seconds... 00:32:11.729 Running I/O for 1 seconds... 
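The four "Reactor started on core N" notices line up with the four -m masks because each mask has exactly one bit set and the bit index is the core number: 0x10 -> 4, 0x20 -> 5, 0x40 -> 6, 0x80 -> 7. A quick shell check of that mapping:

  for mask in 0x10 0x20 0x40 0x80; do
    m=$((mask)) c=0
    while (( m > 1 )); do (( m >>= 1, c++ )); done   # position of the set bit
    echo "bdevperf -m $mask -> reactor on core $c"
  done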
00:32:12.677 12750.00 IOPS, 49.80 MiB/s
00:32:12.677 Latency(us)
00:32:12.677 [2024-10-21T10:16:49.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.677 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:12.677 Nvme1n1 : 1.01 12806.87 50.03 0.00 0.00 9960.48 2293.76 13271.04
00:32:12.677 [2024-10-21T10:16:49.272Z] ===================================================================================================================
00:32:12.677 [2024-10-21T10:16:49.272Z] Total : 12806.87 50.03 0.00 0.00 9960.48 2293.76 13271.04
00:32:12.677 6216.00 IOPS, 24.28 MiB/s
00:32:12.677 Latency(us)
00:32:12.677 [2024-10-21T10:16:49.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.677 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:12.677 Nvme1n1 : 1.02 6245.45 24.40 0.00 0.00 20318.95 6062.08 29272.75
00:32:12.677 [2024-10-21T10:16:49.272Z] ===================================================================================================================
00:32:12.677 [2024-10-21T10:16:49.272Z] Total : 6245.45 24.40 0.00 0.00 20318.95 6062.08 29272.75
00:32:12.677 188368.00 IOPS, 735.81 MiB/s
00:32:12.677 Latency(us)
00:32:12.677 [2024-10-21T10:16:49.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.678 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:12.678 Nvme1n1 : 1.00 187989.31 734.33 0.00 0.00 677.19 302.08 1979.73
00:32:12.678 [2024-10-21T10:16:49.273Z] ===================================================================================================================
00:32:12.678 [2024-10-21T10:16:49.273Z] Total : 187989.31 734.33 0.00 0.00 677.19 302.08 1979.73
00:32:12.939 6160.00 IOPS, 24.06 MiB/s
00:32:12.939 Latency(us)
00:32:12.939 [2024-10-21T10:16:49.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.939 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:12.939 Nvme1n1 : 1.01 6242.59 24.39 0.00 0.00 20428.16 5652.48 39758.51
00:32:12.939 [2024-10-21T10:16:49.534Z] ===================================================================================================================
00:32:12.939 [2024-10-21T10:16:49.534Z] Total : 6242.59 24.39 0.00 0.00 20428.16 5652.48 39758.51
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1212978
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1212980
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1212983
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.939 rmmod nvme_tcp 00:32:12.939 rmmod nvme_fabrics 00:32:12.939 rmmod nvme_keyring 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1212744 ']' 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1212744 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1212744 ']' 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1212744 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.939 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1212744 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1212744' 00:32:13.200 killing process with pid 1212744 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1212744 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1212744 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:13.200 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.201 12:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.748 00:32:15.748 real 0m13.126s 00:32:15.748 user 0m15.644s 00:32:15.748 sys 0m7.891s 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.748 ************************************ 00:32:15.748 END TEST nvmf_bdev_io_wait 00:32:15.748 ************************************ 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:15.748 ************************************ 00:32:15.748 START TEST nvmf_queue_depth 00:32:15.748 ************************************ 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:15.748 * Looking for test storage... 
00:32:15.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:15.748 12:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:15.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.748 --rc genhtml_branch_coverage=1 00:32:15.748 --rc genhtml_function_coverage=1 00:32:15.748 --rc genhtml_legend=1 00:32:15.748 --rc geninfo_all_blocks=1 00:32:15.748 --rc geninfo_unexecuted_blocks=1 00:32:15.748 00:32:15.748 ' 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:15.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.748 --rc genhtml_branch_coverage=1 00:32:15.748 --rc genhtml_function_coverage=1 00:32:15.748 --rc genhtml_legend=1 00:32:15.748 --rc geninfo_all_blocks=1 00:32:15.748 --rc geninfo_unexecuted_blocks=1 00:32:15.748 00:32:15.748 ' 00:32:15.748 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:15.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.749 --rc genhtml_branch_coverage=1 00:32:15.749 --rc genhtml_function_coverage=1 00:32:15.749 --rc genhtml_legend=1 00:32:15.749 --rc geninfo_all_blocks=1 00:32:15.749 --rc geninfo_unexecuted_blocks=1 00:32:15.749 00:32:15.749 ' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:15.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.749 --rc genhtml_branch_coverage=1 00:32:15.749 --rc genhtml_function_coverage=1 00:32:15.749 --rc genhtml_legend=1 00:32:15.749 --rc geninfo_all_blocks=1 00:32:15.749 --rc 
geninfo_unexecuted_blocks=1 00:32:15.749 00:32:15.749 ' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.749 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
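The device probing that follows is plain sysfs globbing: every PCI function whose vendor/device ID landed in the e810/x722/mlx arrays is checked for a bound netdev under /sys/bus/pci/devices/<bdf>/net/. A sketch for the two e810 ports this rig reports:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    # one directory entry per netdev bound to the function (cvl_0_0 / cvl_0_1 here)
    ls "/sys/bus/pci/devices/$pci/net/"
  done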
00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.895 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.896 12:16:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:23.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:23.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:23.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:23.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:23.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:32:23.896 00:32:23.896 --- 10.0.0.2 ping statistics --- 00:32:23.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.896 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:32:23.896 00:32:23.896 --- 10.0.0.1 ping statistics --- 00:32:23.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.896 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1217594 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1217594 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1217594 ']' 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
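The target/initiator pair in this phy test lives on one host: the two ports of one NIC serve as the two endpoints, and the target port is moved into a private network namespace so the TCP traffic genuinely leaves one port and enters the other. The commands just traced reduce to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # every target-side command, nvmf_tgt included, then runs under:
  # ip netns exec cvl_0_0_ns_spdk <cmd>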
00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.896 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.896 [2024-10-21 12:16:59.684445] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:23.896 [2024-10-21 12:16:59.685576] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:32:23.897 [2024-10-21 12:16:59.685628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.897 [2024-10-21 12:16:59.775983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.897 [2024-10-21 12:16:59.827064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.897 [2024-10-21 12:16:59.827115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.897 [2024-10-21 12:16:59.827124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.897 [2024-10-21 12:16:59.827131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.897 [2024-10-21 12:16:59.827137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.897 [2024-10-21 12:16:59.827872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.897 [2024-10-21 12:16:59.903780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:23.897 [2024-10-21 12:16:59.904075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
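This target runs with --interrupt-mode, so the app_thread and the nvmf poll-group thread are switched to interrupt mode at startup (the thread.c notices above) and the single reactor on core 1 sleeps on file descriptors instead of busy-polling. To inspect the reactor state of a live target, framework_get_reactors reports it per core; a sketch against this target's default RPC socket:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors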
00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 [2024-10-21 12:17:00.548776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 Malloc0 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
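Condensed, the provisioning sequence queue_depth.sh just drove over rpc_cmd (commands verbatim from the trace; comments are interpretive):

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u io-unit-size, -o adjusts the C2H-success optimization
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420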
00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 [2024-10-21 12:17:00.628946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1217688 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1217688 /var/tmp/bdevperf.sock 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1217688 ']' 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:24.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.159 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 [2024-10-21 12:17:00.689045] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
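bdevperf is the initiator half of the test: -z starts it idle until it is configured over its own RPC socket, then the 1024-deep verify workload (the queue depth under test) runs for 10 seconds. Condensed from the trace, with the repo path shortened:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the final latency table follow below.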
00:32:24.159 [2024-10-21 12:17:00.689119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217688 ] 00:32:24.421 [2024-10-21 12:17:00.771757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.421 [2024-10-21 12:17:00.825559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.992 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.992 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:24.992 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.992 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.992 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.252 NVMe0n1 00:32:25.252 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.252 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:25.252 Running I/O for 10 seconds... 00:32:27.137 9143.00 IOPS, 35.71 MiB/s [2024-10-21T10:17:05.114Z] 9216.00 IOPS, 36.00 MiB/s [2024-10-21T10:17:06.055Z] 9900.67 IOPS, 38.67 MiB/s [2024-10-21T10:17:06.997Z] 10754.50 IOPS, 42.01 MiB/s [2024-10-21T10:17:07.939Z] 11386.00 IOPS, 44.48 MiB/s [2024-10-21T10:17:08.878Z] 11786.00 IOPS, 46.04 MiB/s [2024-10-21T10:17:09.818Z] 12066.00 IOPS, 47.13 MiB/s [2024-10-21T10:17:10.762Z] 12307.75 IOPS, 48.08 MiB/s [2024-10-21T10:17:12.158Z] 12514.11 IOPS, 48.88 MiB/s [2024-10-21T10:17:12.158Z] 12671.40 IOPS, 49.50 MiB/s 00:32:35.563 Latency(us) 00:32:35.563 [2024-10-21T10:17:12.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.563 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:35.563 Verification LBA range: start 0x0 length 0x4000 00:32:35.563 NVMe0n1 : 10.05 12693.60 49.58 0.00 0.00 80366.06 15400.96 67283.63 00:32:35.563 [2024-10-21T10:17:12.158Z] =================================================================================================================== 00:32:35.563 [2024-10-21T10:17:12.158Z] Total : 12693.60 49.58 0.00 0.00 80366.06 15400.96 67283.63 00:32:35.563 { 00:32:35.563 "results": [ 00:32:35.563 { 00:32:35.563 "job": "NVMe0n1", 00:32:35.563 "core_mask": "0x1", 00:32:35.563 "workload": "verify", 00:32:35.563 "status": "finished", 00:32:35.563 "verify_range": { 00:32:35.563 "start": 0, 00:32:35.563 "length": 16384 00:32:35.563 }, 00:32:35.563 "queue_depth": 1024, 00:32:35.563 "io_size": 4096, 00:32:35.563 "runtime": 10.051207, 00:32:35.563 "iops": 12693.59988307872, 00:32:35.563 "mibps": 49.58437454327625, 00:32:35.563 "io_failed": 0, 00:32:35.563 "io_timeout": 0, 00:32:35.563 "avg_latency_us": 80366.06048354313, 00:32:35.563 "min_latency_us": 15400.96, 00:32:35.563 "max_latency_us": 67283.62666666666 00:32:35.563 } 00:32:35.563 ], 
00:32:35.563 "core_count": 1 00:32:35.563 } 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1217688 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1217688 ']' 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1217688 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217688 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217688' 00:32:35.563 killing process with pid 1217688 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1217688 00:32:35.563 Received shutdown signal, test time was about 10.000000 seconds 00:32:35.563 00:32:35.563 Latency(us) 00:32:35.563 [2024-10-21T10:17:12.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.563 [2024-10-21T10:17:12.158Z] =================================================================================================================== 00:32:35.563 [2024-10-21T10:17:12.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1217688 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:35.563 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:35.563 rmmod nvme_tcp 00:32:35.563 rmmod nvme_fabrics 00:32:35.563 rmmod nvme_keyring 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:35.563 12:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1217594 ']' 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1217594 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1217594 ']' 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1217594 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217594 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:35.563 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:35.564 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217594' 00:32:35.564 killing process with pid 1217594 00:32:35.564 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1217594 00:32:35.564 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1217594 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.866 12:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.779 00:32:37.779 real 0m22.412s 00:32:37.779 user 0m24.554s 00:32:37.779 sys 0m7.421s 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:37.779 ************************************ 00:32:37.779 END TEST nvmf_queue_depth 00:32:37.779 ************************************ 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:37.779 ************************************ 00:32:37.779 START TEST nvmf_target_multipath 00:32:37.779 ************************************ 00:32:37.779 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:38.040 * Looking for test storage... 00:32:38.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:38.040 12:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:38.040 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.041 --rc genhtml_branch_coverage=1 00:32:38.041 --rc genhtml_function_coverage=1 00:32:38.041 --rc genhtml_legend=1 00:32:38.041 --rc geninfo_all_blocks=1 00:32:38.041 --rc geninfo_unexecuted_blocks=1 00:32:38.041 00:32:38.041 ' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.041 --rc genhtml_branch_coverage=1 00:32:38.041 --rc genhtml_function_coverage=1 00:32:38.041 --rc genhtml_legend=1 00:32:38.041 --rc geninfo_all_blocks=1 00:32:38.041 --rc geninfo_unexecuted_blocks=1 00:32:38.041 00:32:38.041 ' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.041 --rc genhtml_branch_coverage=1 00:32:38.041 --rc genhtml_function_coverage=1 00:32:38.041 --rc genhtml_legend=1 00:32:38.041 --rc geninfo_all_blocks=1 00:32:38.041 --rc 
geninfo_unexecuted_blocks=1 00:32:38.041 00:32:38.041 ' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.041 --rc genhtml_branch_coverage=1 00:32:38.041 --rc genhtml_function_coverage=1 00:32:38.041 --rc genhtml_legend=1 00:32:38.041 --rc geninfo_all_blocks=1 00:32:38.041 --rc geninfo_unexecuted_blocks=1 00:32:38.041 00:32:38.041 ' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
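common.sh builds the initiator identity from nvme-cli here: gen-hostnqn returns a UUID-based NQN, and the NVME_HOSTID visible above is its UUID suffix. A sketch of the derivation (the exact parsing in common.sh may differ):

    HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    HOSTID=${HOSTNQN##*:}          # bare UUID, later passed as --hostid to nvme connect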
00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.041 12:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.041 12:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
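nvmftestinit now looks for physical NICs (NET_TYPE=phy): it matches PCI vendor/device IDs against the e810/x722/mlx tables built below, then resolves each match to a kernel netdev through sysfs. A sketch of that sysfs step, using the device the log goes on to find:

    pci=0000:4b:00.0                                  # E810 port found below (0x8086:0x159b)
    for dev in /sys/bus/pci/devices/$pci/net/*; do    # netdevs bound to this PCI function
        echo "Found net device under $pci: ${dev##*/}"
    done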
00:32:46.196 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.196 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.197 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:46.197 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:46.197 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.197 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:46.197 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:46.197 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.197 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:46.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:32:46.198 00:32:46.198 --- 10.0.0.2 ping statistics --- 00:32:46.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.198 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:32:46.198 00:32:46.198 --- 10.0.0.1 ping statistics --- 00:32:46.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.198 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:46.198 only one NIC for nvmf test 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.198 rmmod nvme_tcp 00:32:46.198 rmmod nvme_fabrics 00:32:46.198 rmmod nvme_keyring 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:46.198 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.198 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.584 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:47.584 12:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.584 00:32:47.584 real 0m9.667s 00:32:47.584 user 0m2.105s 00:32:47.584 sys 0m5.521s 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:47.584 ************************************ 00:32:47.584 END TEST nvmf_target_multipath 00:32:47.584 ************************************ 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:47.584 ************************************ 00:32:47.584 START TEST nvmf_zcopy 00:32:47.584 ************************************ 00:32:47.584 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:47.847 * Looking for test storage... 
00:32:47.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.847 --rc genhtml_branch_coverage=1 00:32:47.847 --rc genhtml_function_coverage=1 00:32:47.847 --rc genhtml_legend=1 00:32:47.847 --rc geninfo_all_blocks=1 00:32:47.847 --rc geninfo_unexecuted_blocks=1 00:32:47.847 00:32:47.847 ' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.847 --rc genhtml_branch_coverage=1 00:32:47.847 --rc genhtml_function_coverage=1 00:32:47.847 --rc genhtml_legend=1 00:32:47.847 --rc geninfo_all_blocks=1 00:32:47.847 --rc geninfo_unexecuted_blocks=1 00:32:47.847 00:32:47.847 ' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.847 --rc genhtml_branch_coverage=1 00:32:47.847 --rc genhtml_function_coverage=1 00:32:47.847 --rc genhtml_legend=1 00:32:47.847 --rc geninfo_all_blocks=1 00:32:47.847 --rc geninfo_unexecuted_blocks=1 00:32:47.847 00:32:47.847 ' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.847 --rc genhtml_branch_coverage=1 00:32:47.847 --rc genhtml_function_coverage=1 00:32:47.847 --rc genhtml_legend=1 00:32:47.847 --rc geninfo_all_blocks=1 00:32:47.847 --rc geninfo_unexecuted_blocks=1 00:32:47.847 00:32:47.847 ' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.847 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.848 12:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.848 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.990 12:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:55.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:55.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:55.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:55.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.990 12:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:32:55.990 00:32:55.990 --- 10.0.0.2 ping statistics --- 00:32:55.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.990 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:32:55.990 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:32:55.990 00:32:55.990 --- 10.0.0.1 ping statistics --- 00:32:55.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.991 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1228163 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1228163 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1228163 ']' 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.991 12:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 [2024-10-21 12:17:31.651957] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:55.991 [2024-10-21 12:17:31.652920] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:32:55.991 [2024-10-21 12:17:31.652958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.991 [2024-10-21 12:17:31.736604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.991 [2024-10-21 12:17:31.771648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.991 [2024-10-21 12:17:31.771678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.991 [2024-10-21 12:17:31.771687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.991 [2024-10-21 12:17:31.771693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.991 [2024-10-21 12:17:31.771699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.991 [2024-10-21 12:17:31.772262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.991 [2024-10-21 12:17:31.826412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.991 [2024-10-21 12:17:31.826663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
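The launch traced above starts nvmf_tgt inside the target namespace in interrupt mode and then blocks in waitforlisten until the app is reachable. A sketch of that step, assuming the workspace paths from this log; the rpc.py polling loop stands in for the harness's waitforlisten helper, whose internals are not shown here.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Single core (-m 0x2), interrupt mode, all tracepoint groups enabled
# (-e 0xFFFF), shared-memory id 0 (-i 0), matching the invocation above.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Poll the default UNIX RPC socket until the app answers (waitforlisten stand-in).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done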
00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 [2024-10-21 12:17:32.481002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 [2024-10-21 12:17:32.509219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:55.991 12:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 malloc0 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:55.991 { 00:32:55.991 "params": { 00:32:55.991 "name": "Nvme$subsystem", 00:32:55.991 "trtype": "$TEST_TRANSPORT", 00:32:55.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:55.991 "adrfam": "ipv4", 00:32:55.991 "trsvcid": "$NVMF_PORT", 00:32:55.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:55.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:55.991 "hdgst": ${hdgst:-false}, 00:32:55.991 "ddgst": ${ddgst:-false} 00:32:55.991 }, 00:32:55.991 "method": "bdev_nvme_attach_controller" 00:32:55.991 } 00:32:55.991 EOF 00:32:55.991 )") 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:55.991 12:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:55.991 "params": { 00:32:55.991 "name": "Nvme1", 00:32:55.991 "trtype": "tcp", 00:32:55.991 "traddr": "10.0.0.2", 00:32:55.991 "adrfam": "ipv4", 00:32:55.991 "trsvcid": "4420", 00:32:55.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:55.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:55.991 "hdgst": false, 00:32:55.991 "ddgst": false 00:32:55.991 }, 00:32:55.991 "method": "bdev_nvme_attach_controller" 00:32:55.991 }' 00:32:56.252 [2024-10-21 12:17:32.608820] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:32:56.252 [2024-10-21 12:17:32.608880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228383 ] 00:32:56.252 [2024-10-21 12:17:32.689958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.252 [2024-10-21 12:17:32.743163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.513 Running I/O for 10 seconds... 00:32:58.399 6368.00 IOPS, 49.75 MiB/s [2024-10-21T10:17:36.379Z] 6471.50 IOPS, 50.56 MiB/s [2024-10-21T10:17:37.320Z] 6472.67 IOPS, 50.57 MiB/s [2024-10-21T10:17:38.261Z] 6504.50 IOPS, 50.82 MiB/s [2024-10-21T10:17:39.202Z] 7083.60 IOPS, 55.34 MiB/s [2024-10-21T10:17:40.164Z] 7494.17 IOPS, 58.55 MiB/s [2024-10-21T10:17:41.107Z] 7784.00 IOPS, 60.81 MiB/s [2024-10-21T10:17:42.048Z] 7996.25 IOPS, 62.47 MiB/s [2024-10-21T10:17:42.990Z] 8165.56 IOPS, 63.79 MiB/s [2024-10-21T10:17:43.251Z] 8300.40 IOPS, 64.85 MiB/s 00:33:06.656 Latency(us) 00:33:06.656 [2024-10-21T10:17:43.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:06.656 Verification LBA range: start 0x0 length 0x1000 00:33:06.656 Nvme1n1 : 10.05 8270.74 64.62 0.00 0.00 15372.27 2703.36 45438.29 00:33:06.656 [2024-10-21T10:17:43.251Z] =================================================================================================================== 00:33:06.656 [2024-10-21T10:17:43.251Z] Total : 8270.74 64.62 0.00 0.00 15372.27 2703.36 45438.29 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1230391 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:06.656 { 00:33:06.656 "params": { 00:33:06.656 "name": "Nvme$subsystem", 00:33:06.656 "trtype": "$TEST_TRANSPORT", 00:33:06.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:06.656 "adrfam": "ipv4", 00:33:06.656 "trsvcid": "$NVMF_PORT", 00:33:06.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:06.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:06.656 "hdgst": ${hdgst:-false}, 00:33:06.656 "ddgst": ${ddgst:-false} 00:33:06.656 }, 00:33:06.656 "method": "bdev_nvme_attach_controller" 00:33:06.656 } 00:33:06.656 EOF 00:33:06.656 )") 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:06.656 
[2024-10-21 12:17:43.128582] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.656 [2024-10-21 12:17:43.128610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:06.656 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:06.656 "params": { 00:33:06.656 "name": "Nvme1", 00:33:06.656 "trtype": "tcp", 00:33:06.656 "traddr": "10.0.0.2", 00:33:06.656 "adrfam": "ipv4", 00:33:06.656 "trsvcid": "4420", 00:33:06.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:06.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:06.656 "hdgst": false, 00:33:06.656 "ddgst": false 00:33:06.656 }, 00:33:06.656 "method": "bdev_nvme_attach_controller" 00:33:06.656 }' 00:33:06.656 [2024-10-21 12:17:43.140556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.656 [2024-10-21 12:17:43.140566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.656 [2024-10-21 12:17:43.152552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.656 [2024-10-21 12:17:43.152561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.656 [2024-10-21 12:17:43.164552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.656 [2024-10-21 12:17:43.164559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.656 [2024-10-21 12:17:43.171942] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:33:06.656 [2024-10-21 12:17:43.171990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230391 ] 00:33:06.656 [2024-10-21 12:17:43.176552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.656 [2024-10-21 12:17:43.176560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.188552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.188560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.200552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.200560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.212552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.212559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.224552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.224559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.236552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.236559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.657 [2024-10-21 12:17:43.245838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.657 [2024-10-21 12:17:43.248552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.657 [2024-10-21 12:17:43.248560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.917 [2024-10-21 12:17:43.260553] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.917 [2024-10-21 12:17:43.260562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.917 [2024-10-21 12:17:43.272552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.917 [2024-10-21 12:17:43.272561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.917 [2024-10-21 12:17:43.275629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.917 [2024-10-21 12:17:43.284551] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.917 [2024-10-21 12:17:43.284560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.917 [2024-10-21 12:17:43.296556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.917 [2024-10-21 12:17:43.296568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.917 [2024-10-21 12:17:43.308554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.917 [2024-10-21 12:17:43.308566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.320554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:06.918 [2024-10-21 12:17:43.320563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.332553] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.332561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.344560] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.344577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.356554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.356564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.368559] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.368571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.380552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.380560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.392552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.392559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.404552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.404560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.416552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.416561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.428554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.428564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 [2024-10-21 12:17:43.440558] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.440573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 Running I/O for 5 seconds... 
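The bdevperf runs above feed their NVMe-oF attach configuration through an anonymous file descriptor (--json /dev/fd/63) instead of a file on disk, with gen_nvmf_target_json printing the attach parameters seen earlier in the trace. A reproduction sketch, under the assumption that bdevperf expects that params object wrapped in a standard bdev-subsystem config (the wrapper itself is produced by gen_nvmf_target_json and is not printed in this excerpt):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
gen_config() {
    # Assumed wrapper around the attach parameters printed above.
    cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
}
# 5 s of 50/50 random read/write at queue depth 128 with 8 KiB I/Os,
# matching the flags logged above; process substitution supplies the
# /dev/fd path that --json reads.
"$SPDK/build/examples/bdevperf" --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192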
00:33:06.918 [2024-10-21 12:17:43.457034] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.918 [2024-10-21 12:17:43.457049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.918 (the same NSID-in-use / unable-to-add-namespace pair repeats roughly every 13 ms for the remainder of the 5-second run; several dozen near-identical repetitions between 12:17:43.471951 and 12:17:44.228164 elided) 00:33:07.703 [2024-10-21 12:17:44.240921]
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.703 [2024-10-21 12:17:44.240936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.703 [2024-10-21 12:17:44.255700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.703 [2024-10-21 12:17:44.255715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.703 [2024-10-21 12:17:44.268669] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.703 [2024-10-21 12:17:44.268685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.703 [2024-10-21 12:17:44.281411] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.703 [2024-10-21 12:17:44.281426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.703 [2024-10-21 12:17:44.295889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.703 [2024-10-21 12:17:44.295904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.308764] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.308778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.323623] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.323638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.336634] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.336657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.348700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.348715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.363942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.363958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.376700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.376715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.391362] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.391378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.404483] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.404499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.417402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.417417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.431614] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.431630] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.444495] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.444512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 18810.00 IOPS, 146.95 MiB/s [2024-10-21T10:17:44.559Z] [2024-10-21 12:17:44.457333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.457348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.472034] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.472050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.484985] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.484999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.500152] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.500167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.513145] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.513160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.527630] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.527645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.541047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.541062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.964 [2024-10-21 12:17:44.556509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.964 [2024-10-21 12:17:44.556525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.569076] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.569091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.583779] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.583794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.596440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.596455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.609062] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.609076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.623928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.623943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 
12:17:44.636269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.636284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.648652] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.648667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.661629] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.661644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.676596] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.676611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.689229] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.689244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.703888] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.703904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.716817] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.716832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.729064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.729079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.743711] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.743725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.756857] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.756871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.772161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.772177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.784613] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.784628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.796331] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.796347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.225 [2024-10-21 12:17:44.809617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.225 [2024-10-21 12:17:44.809632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.486 [2024-10-21 12:17:44.823656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.486 [2024-10-21 12:17:44.823672] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.486 [2024-10-21 12:17:44.836195] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.486 [2024-10-21 12:17:44.836210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.486 [2024-10-21 12:17:44.848782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.486 [2024-10-21 12:17:44.848797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.863824] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.863839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.876792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.876807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.889573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.889588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.903564] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.903580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.916587] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.916602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.929112] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.929127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.943750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.943766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.956545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.956560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.968889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.968904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.983974] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.983989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:44.996779] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:44.996794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.011803] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.011819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.024556] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.024571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.036442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.036457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.049180] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.049194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.064336] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.064352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.487 [2024-10-21 12:17:45.077201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.487 [2024-10-21 12:17:45.077216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.091566] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.091583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.104648] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.104663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.116453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.116468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.129648] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.129663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.143398] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.143413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.156735] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.156750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.168509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.168524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.181112] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.181128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.195719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.195735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.208657] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.208672] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.221445] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.221460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.236004] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.236019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.248786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.248800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.263654] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.263669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.276512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.276527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.288972] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.288986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.303373] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.303388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.315909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.315924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.328365] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.328380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.748 [2024-10-21 12:17:45.341308] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.748 [2024-10-21 12:17:45.341328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.008 [2024-10-21 12:17:45.356356] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.008 [2024-10-21 12:17:45.356372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.008 [2024-10-21 12:17:45.369145] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.008 [2024-10-21 12:17:45.369159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.383325] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.383341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.396226] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.396241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.408806] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.408821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.423546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.423560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.436434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.436449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.449016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.449031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 18879.50 IOPS, 147.50 MiB/s [2024-10-21T10:17:45.604Z] [2024-10-21 12:17:45.463992] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.464007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.476412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.476426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.489525] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.489540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.503574] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.503589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.516390] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.516405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.528907] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.528921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.543777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.543792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.556639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.556655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.568877] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.568893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.583477] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.009 [2024-10-21 12:17:45.583493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.009 [2024-10-21 12:17:45.596523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:09.009 [2024-10-21 12:17:45.596543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.609499] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.609515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.623897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.623912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.636973] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.636988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.651967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.651983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.664892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.664907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.680105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.680120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.692538] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.692553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.704457] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.704472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.717377] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.717392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.731162] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.731178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.744038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.744054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.756556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.756572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.769292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.769307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.784253] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.784268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.797043] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.797058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.811873] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.811888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.824535] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.824550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.836436] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.836452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.849580] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.849599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.270 [2024-10-21 12:17:45.863917] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.270 [2024-10-21 12:17:45.863932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.876908] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.876925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.891643] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.891659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.904418] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.904433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.917210] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.917225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.932176] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.932192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.944857] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.944872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.960075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.960090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.972754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.972769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.984433] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.984448] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:45.997083] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:45.997097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.011997] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.012012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.025246] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.025260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.039841] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.039857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.052832] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.052847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.067510] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.067526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.080748] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.080762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.095442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.095457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.108554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.108573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.531 [2024-10-21 12:17:46.121688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.531 [2024-10-21 12:17:46.121703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.135993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.136009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.148833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.148849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.164023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.164038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.176800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.176815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.191765] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.191780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.204224] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.204240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.217068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.217083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.231907] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.231922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.244773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.244789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.259905] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.259920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.272978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.272992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.287745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.287760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.300716] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.791 [2024-10-21 12:17:46.300731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.791 [2024-10-21 12:17:46.315450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.315465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.792 [2024-10-21 12:17:46.328781] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.328796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.792 [2024-10-21 12:17:46.343767] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.343783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.792 [2024-10-21 12:17:46.356512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.356527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.792 [2024-10-21 12:17:46.368862] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.368877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.792 [2024-10-21 12:17:46.383710] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.792 [2024-10-21 12:17:46.383725] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.396427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.396442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.409212] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.409227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.423563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.423577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.436568] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.436583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.449047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.449062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 18880.00 IOPS, 147.50 MiB/s [2024-10-21T10:17:46.647Z] [2024-10-21 12:17:46.463887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.463902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.476523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.476538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.489144] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.489159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.503842] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.503858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.516795] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.516810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.531441] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.531456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.544602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.544617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.557388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.557403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.571896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.571911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 
12:17:46.585061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.585075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.599833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.599849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.612617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.612632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.625394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.625409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.052 [2024-10-21 12:17:46.640364] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.052 [2024-10-21 12:17:46.640379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.653301] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.653316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.668123] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.668138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.680829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.680843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.695774] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.695789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.708632] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.708647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.721095] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.721109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.735227] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.735241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.748412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.748427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.761063] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.761078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.313 [2024-10-21 12:17:46.775801] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.313 [2024-10-21 12:17:46.775816] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:10.313 [2024-10-21 12:17:46.789174] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:10.313 [2024-10-21 12:17:46.789188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this subsystem.c:2124 / nvmf_rpc.c:1517 error pair repeats roughly every 12-15 ms from 12:17:46.803639 through 12:17:47.448602; the near-identical repetitions are trimmed here ...]
00:33:11.098 18876.25 IOPS, 147.47 MiB/s [2024-10-21T10:17:47.693Z]
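As a quick cross-check of that throughput sample (an aside by the editor, not harness output): the job summary further down reports an 8192-byte I/O size, and IOPS times I/O size reproduces the logged MiB/s figure.

# Sanity check, assuming the 8192-byte IO size from the job summary below;
# bc is assumed available on the build host.
echo "scale=2; 18876.25 * 8192 / 1048576" | bc
# prints 147.47, matching the logged 147.47 MiB/s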
[... the same error pair continues at the same cadence from 12:17:47.461131 through 12:17:48.344658 while I/O keeps running; repetitions trimmed ...]
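For readers skimming the trimmed error storm: it is the expected output of this test's error-injection loop, which keeps trying to attach another namespace under an NSID that is already taken while I/O runs. A minimal sketch of provoking the same subsystem.c:2124 error by hand, assuming a running SPDK target whose subsystem nqn.2016-06.io.spdk:cnode1 has no namespace 1 yet, using the SPDK tree's scripts/rpc.py (the malloc bdev names are illustrative, not from this run):

# Run from the SPDK source root. Two malloc bdevs (64 MB, 512-byte blocks).
scripts/rpc.py bdev_malloc_create -b malloc0 64 512
scripts/rpc.py bdev_malloc_create -b malloc1 64 512
# The first add claims NSID 1; the second must fail with
# "Requested NSID 1 already in use", exactly as logged above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1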
[... the error pair continues from 12:17:48.357110 through 12:17:48.449057 ...]
00:33:11.882 18884.20 IOPS, 147.53 MiB/s [2024-10-21T10:17:48.477Z]
00:33:11.882 [2024-10-21 12:17:48.464016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:11.882 [2024-10-21 12:17:48.464032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:11.882
00:33:11.882 Latency(us)
00:33:11.882 [2024-10-21T10:17:48.477Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s   Average       min       max
00:33:11.882 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:11.882 Nvme1n1            :       5.01   18886.08     147.55      0.00    0.00   6771.53   2635.09  11359.57
00:33:11.882 [2024-10-21T10:17:48.477Z] ===================================================================================================================
00:33:11.882 [2024-10-21T10:17:48.477Z] Total              :               18886.08     147.55      0.00    0.00   6771.53   2635.09  11359.57
[... the error pair resumes at 12:17:48.472558 and repeats through 12:17:48.568560 ...]
00:33:12.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1230391) - No such process
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1230391
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:12.144 delay0
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
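Before launching the abort example, the test swaps namespace 1 from malloc0 to a delay bdev, which is what makes aborts meaningful: per SPDK's bdev_delay_create, -r and -w set the average read and write latencies, -t and -n the tail ("nine-nine") read and write latencies, all in microseconds. With 1000000 everywhere, every I/O to delay0 takes about a second, leaving a wide window in which an abort can win the race. A sketch of the same two RPCs outside the harness (rpc_cmd above is the harness's wrapper for SPDK's scripts/rpc.py):

# Wrap malloc0 in a delay bdev: ~1 s average and p99 latency, reads and writes.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Expose the slow bdev as namespace 1 of the existing subsystem.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1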
00:33:12.144 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:12.404 [2024-10-21 12:17:48.762504] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:20.542 Initializing NVMe Controllers
00:33:20.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:20.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:20.542 Initialization complete. Launching workers.
00:33:20.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6862
00:33:20.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7147, failed to submit 35
00:33:20.542 success 6953, unsuccessful 194, failed 0
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:20.542 rmmod nvme_tcp
00:33:20.542 rmmod nvme_fabrics
00:33:20.542 rmmod nvme_keyring
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1228163 ']'
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1228163
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1228163 ']'
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1228163
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:20.542 12:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1228163
00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@968 -- # echo 'killing process with pid 1228163' 00:33:20.542 killing process with pid 1228163 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1228163 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1228163 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.542 12:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.927 00:33:21.927 real 0m34.084s 00:33:21.927 user 0m43.780s 00:33:21.927 sys 0m12.279s 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.927 ************************************ 00:33:21.927 END TEST nvmf_zcopy 00:33:21.927 ************************************ 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:21.927 ************************************ 00:33:21.927 START TEST nvmf_nmic 00:33:21.927 ************************************ 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:21.927 * Looking for test storage... 
00:33:21.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.927 --rc genhtml_branch_coverage=1 00:33:21.927 --rc genhtml_function_coverage=1 00:33:21.927 --rc genhtml_legend=1 00:33:21.927 --rc geninfo_all_blocks=1 00:33:21.927 --rc geninfo_unexecuted_blocks=1 00:33:21.927 00:33:21.927 ' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.927 --rc genhtml_branch_coverage=1 00:33:21.927 --rc genhtml_function_coverage=1 00:33:21.927 --rc genhtml_legend=1 00:33:21.927 --rc geninfo_all_blocks=1 00:33:21.927 --rc geninfo_unexecuted_blocks=1 00:33:21.927 00:33:21.927 ' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.927 --rc genhtml_branch_coverage=1 00:33:21.927 --rc genhtml_function_coverage=1 00:33:21.927 --rc genhtml_legend=1 00:33:21.927 --rc geninfo_all_blocks=1 00:33:21.927 --rc geninfo_unexecuted_blocks=1 00:33:21.927 00:33:21.927 ' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.927 --rc genhtml_branch_coverage=1 00:33:21.927 --rc genhtml_function_coverage=1 00:33:21.927 --rc genhtml_legend=1 00:33:21.927 --rc geninfo_all_blocks=1 00:33:21.927 --rc geninfo_unexecuted_blocks=1 00:33:21.927 00:33:21.927 ' 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.927 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go directories repeated by earlier prepends, trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same directories, reordered by another prepend ...]
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same directories, reordered by another prepend ...]
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... final PATH value as assigned at paths/export.sh@4 ...]:/var/lib/snapd/snap/bin
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:21.928 12:17:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.928 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.189 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.331 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.332 12:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.332 12:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:30.332 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.332 
12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.332 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
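
Note: the device scan in this stretch keys NIC lists (e810, x722, mlx) by PCI vendor:device ID and then resolves each matched function to its kernel net device through sysfs. A minimal standalone sketch of that discovery step, using the 0x8086:0x159b (Intel E810) ID and the /sys/bus/pci layout visible in the trace; the helper name is illustrative and not part of the harness:

    #!/usr/bin/env bash
    # List net devices backed by Intel E810 functions (vendor 0x8086, device 0x159b),
    # mirroring the pci_devs -> pci_net_devs resolution logged above.
    find_e810_netdevs() {
        local pci
        for pci in /sys/bus/pci/devices/*; do
            [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
            ls "$pci/net" 2>/dev/null   # e.g. cvl_0_0, cvl_0_1
        done
    }
    find_e810_netdevs
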
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:30.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:30.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:33:30.332
00:33:30.332 --- 10.0.0.2 ping statistics ---
00:33:30.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:30.332 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:30.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:30.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:33:30.332
00:33:30.332 --- 10.0.0.1 ping statistics ---
00:33:30.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:30.332 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:30.332 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1237034
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1237034
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1237034 ']'
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:30.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:30.333 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 [2024-10-21 12:18:05.860606] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:30.333 [2024-10-21 12:18:05.861714] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:33:30.333 [2024-10-21 12:18:05.861769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:30.333 [2024-10-21 12:18:05.955021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:30.333 [2024-10-21 12:18:06.010967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:30.333 [2024-10-21 12:18:06.011028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:30.333 [2024-10-21 12:18:06.011036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:30.333 [2024-10-21 12:18:06.011043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:30.333 [2024-10-21 12:18:06.011049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:30.333 [2024-10-21 12:18:06.013447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:30.333 [2024-10-21 12:18:06.013730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:30.333 [2024-10-21 12:18:06.013888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:30.333 [2024-10-21 12:18:06.013893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:30.333 [2024-10-21 12:18:06.091289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:30.333 [2024-10-21 12:18:06.091940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:30.333 [2024-10-21 12:18:06.092510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
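
Note: condensed from the commands logged above, the test topology moves the target's port into a private network namespace and leaves the initiator's port in the root namespace, so NVMe/TCP traffic crosses the physical link between the two E810 ports. A sketch of the same setup; interface names, addresses, and the 4420 port are taken directly from the log:

    # Target side lives in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target port
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt --interrupt-mode -m 0xF, as logged), which is why every later RPC and data connection exercises the real NIC path.
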
00:33:30.333 [2024-10-21 12:18:06.092942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:30.333 [2024-10-21 12:18:06.092987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 [2024-10-21 12:18:06.710869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 Malloc0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
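
Note: the rpc_cmd calls here drive the target over its JSON-RPC socket; outside the harness the same provisioning could be done directly with SPDK's scripts/rpc.py client. A sketch, assuming the default /var/tmp/spdk.sock socket the log waits on:

    # Stand up an NVMe/TCP subsystem backed by a 64 MiB, 512 B-block malloc bdev.
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
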
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 [2024-10-21 12:18:06.787120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:33:30.333 test case1: single bdev can't be used in multiple subsystems
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 [2024-10-21 12:18:06.814487] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:33:30.333 [2024-10-21 12:18:06.814511] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:33:30.333 [2024-10-21 12:18:06.814519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:30.333 request:
00:33:30.333 {
00:33:30.333   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:33:30.333   "namespace": {
00:33:30.333     "bdev_name": "Malloc0",
00:33:30.333     "no_auto_visible": false
00:33:30.333   },
00:33:30.333   "method": "nvmf_subsystem_add_ns",
00:33:30.333   "req_id": 1
00:33:30.333 }
00:33:30.333 Got JSON-RPC error response
00:33:30.333 response:
00:33:30.333 {
00:33:30.333   "code": -32602,
00:33:30.333   "message": "Invalid parameters"
00:33:30.333 }
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:33:30.333  Adding namespace failed - expected result.
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:33:30.333 test case2: host connect to nvmf target in multiple paths
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.333 [2024-10-21 12:18:06.826621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.333 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:33:30.905 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:33:31.167 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:33:31.167 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:33:31.167 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:33:31.167 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:33:31.167 12:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:33:33.081 12:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:33:33.081 [global]
00:33:33.081 thread=1
00:33:33.081 invalidate=1
00:33:33.081 rw=write
00:33:33.081 time_based=1
00:33:33.081 runtime=1
00:33:33.081 ioengine=libaio
00:33:33.081 direct=1
00:33:33.081 bs=4096
00:33:33.081 iodepth=1
00:33:33.081 norandommap=0
00:33:33.081 numjobs=1
00:33:33.081
00:33:33.081 verify_dump=1
00:33:33.081 verify_backlog=512
00:33:33.081 verify_state_save=0
00:33:33.081 do_verify=1
00:33:33.081 verify=crc32c-intel
00:33:33.081 [job0]
00:33:33.081 filename=/dev/nvme0n1
00:33:33.365 Could not set queue depth (nvme0n1)
00:33:33.628 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:33.628 fio-3.35
00:33:33.628 Starting 1 thread
00:33:35.013
00:33:35.013 job0: (groupid=0, jobs=1): err= 0: pid=1238272: Mon Oct 21 12:18:11 2024
00:33:35.013   read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:33:35.013     slat (nsec): min=6930, max=59583, avg=28022.67, stdev=3960.07
00:33:35.013     clat (usec): min=466, max=1190, avg=922.06, stdev=109.53
00:33:35.013      lat (usec): min=494, max=1218, avg=950.08, stdev=109.86
00:33:35.013     clat percentiles (usec):
00:33:35.013      | 1.00th=[ 594], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 840],
00:33:35.013      | 30.00th=[ 889], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 963],
00:33:35.013      | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057],
00:33:35.013      | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188],
00:33:35.013      | 99.99th=[ 1188]
00:33:35.013   write: IOPS=895, BW=3580KiB/s (3666kB/s)(3584KiB/1001msec); 0 zone resets
00:33:35.013     slat (nsec): min=9546, max=70390, avg=32905.50, stdev=9684.51
00:33:35.013     clat (usec): min=198, max=857, avg=525.40, stdev=120.68
00:33:35.013      lat (usec): min=227, max=892, avg=558.31, stdev=123.87
00:33:35.013     clat percentiles (usec):
00:33:35.013      | 1.00th=[ 223], 5.00th=[ 306], 10.00th=[ 351], 20.00th=[ 424],
00:33:35.013      | 30.00th=[ 461], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 570],
00:33:35.013      | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 717],
00:33:35.013      | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 857], 99.95th=[ 857],
00:33:35.013      | 99.99th=[ 857]
00:33:35.013    bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:33:35.013    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:33:35.013   lat (usec)   : 250=1.42%, 500=23.15%, 750=41.62%, 1000=25.99%
00:33:35.013   lat (msec)   : 2=7.81%
00:33:35.013   cpu          : usr=2.80%, sys=5.90%, ctx=1410, majf=0, minf=1
00:33:35.013   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:35.013      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:35.013      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:35.013      issued rwts: total=512,896,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:35.013      latency   : target=0, window=0, percentile=100.00%, depth=1
00:33:35.013
00:33:35.013 Run status group 0 (all jobs):
00:33:35.013    READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:33:35.013   WRITE: bw=3580KiB/s (3666kB/s), 3580KiB/s-3580KiB/s (3666kB/s-3666kB/s), io=3584KiB (3670kB), run=1001-1001msec
00:33:35.013
00:33:35.013 Disk stats (read/write):
00:33:35.013   nvme0n1: ios=567/713, merge=0/0, ticks=1145/294, in_queue=1439, util=96.89%
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:33:35.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
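
Note: a quick consistency check of the fio summary above, using bs=4096 from the job file, the IO counts from "issued rwts: total=512,896", and the 1001 ms wall time:

    write:         896 x 4096 B / 1.001 s ~= 3666 kB/s   (reported BW=3580KiB/s (3666kB/s))
    read (verify): 512 x 4096 B / 1.001 s ~= 2095 kB/s   (reported BW=2046KiB/s (2095kB/s))

The read pass exists only because do_verify=1 re-reads and CRC-checks (verify=crc32c-intel) the data the write phase laid down.
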
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:35.013 rmmod nvme_tcp
00:33:35.013 rmmod nvme_fabrics
00:33:35.013 rmmod nvme_keyring
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1237034 ']'
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1237034
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1237034 ']'
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1237034
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237034
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237034'
00:33:35.013 killing process with pid 1237034
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1237034
00:33:35.013 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1237034
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:35.275 12:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:37.247
00:33:37.247 real 0m15.432s
00:33:37.247 user 0m37.716s
00:33:37.247 sys 0m7.247s
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:37.247 ************************************
00:33:37.247 END TEST nvmf_nmic
00:33:37.247 ************************************
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:37.247 ************************************
00:33:37.247 START TEST nvmf_fio_target
00:33:37.247 ************************************
00:33:37.247 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:33:37.513 * Looking for test storage...
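
Note on the teardown just logged: every firewall rule the harness adds is tagged with an 'SPDK_NVMF:' comment (visible in the iptables invocation earlier), so the iptr cleanup can drop exactly those rules and nothing else. The equivalent one-liner, as logged above:

    # Rewrite the ruleset without the SPDK-tagged entries.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
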
00:33:37.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.513 --rc genhtml_branch_coverage=1 00:33:37.513 --rc genhtml_function_coverage=1 00:33:37.513 --rc genhtml_legend=1 00:33:37.513 --rc geninfo_all_blocks=1 00:33:37.513 --rc geninfo_unexecuted_blocks=1 00:33:37.513 00:33:37.513 ' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.513 --rc genhtml_branch_coverage=1 00:33:37.513 --rc genhtml_function_coverage=1 00:33:37.513 --rc genhtml_legend=1 00:33:37.513 --rc geninfo_all_blocks=1 00:33:37.513 --rc geninfo_unexecuted_blocks=1 00:33:37.513 00:33:37.513 ' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.513 --rc genhtml_branch_coverage=1 00:33:37.513 --rc genhtml_function_coverage=1 00:33:37.513 --rc genhtml_legend=1 00:33:37.513 --rc geninfo_all_blocks=1 00:33:37.513 --rc geninfo_unexecuted_blocks=1 00:33:37.513 00:33:37.513 ' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.513 --rc genhtml_branch_coverage=1 00:33:37.513 --rc genhtml_function_coverage=1 00:33:37.513 --rc genhtml_legend=1 00:33:37.513 --rc geninfo_all_blocks=1 00:33:37.513 --rc geninfo_unexecuted_blocks=1 00:33:37.513 
00:33:37.513 ' 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.513 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:37.513 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.513 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.513 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.513 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.514 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.661 12:18:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.661 12:18:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:45.661 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:45.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:45.661 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:45.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.661 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:33:45.662 00:33:45.662 --- 10.0.0.2 ping statistics --- 00:33:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.662 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:45.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:33:45.662 00:33:45.662 --- 10.0.0.1 ping statistics --- 00:33:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.662 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1242848 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1242848 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1242848 ']' 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
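(Note: the nvmf_tcp_init sequence traced above reduces to a short recipe: one E810 port, cvl_0_1, stays in the default namespace as the initiator at 10.0.0.1, while the other, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2; an iptables ACCEPT rule opens the NVMe/TCP listener port 4420 and a ping in each direction verifies connectivity between the namespaces. A condensed sketch of those commands, all taken from the trace above and run as root:

# condensed from the nvmf_tcp_init trace above (sketch; run as root)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> default ns

Splitting the two ports across namespaces lets a single host act as both target and initiator over the real NIC path instead of loopback, which is why both pings succeed here before the target is even started.)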
00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:45.662 12:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.662 [2024-10-21 12:18:21.559661] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:45.662 [2024-10-21 12:18:21.560792] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:33:45.662 [2024-10-21 12:18:21.560847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.662 [2024-10-21 12:18:21.649084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.662 [2024-10-21 12:18:21.701609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.662 [2024-10-21 12:18:21.701663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.662 [2024-10-21 12:18:21.701672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.662 [2024-10-21 12:18:21.701680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.662 [2024-10-21 12:18:21.701686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.662 [2024-10-21 12:18:21.704093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.662 [2024-10-21 12:18:21.704256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.662 [2024-10-21 12:18:21.704416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.662 [2024-10-21 12:18:21.704416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:45.662 [2024-10-21 12:18:21.780983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:45.662 [2024-10-21 12:18:21.782070] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:45.662 [2024-10-21 12:18:21.782439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:45.662 [2024-10-21 12:18:21.782630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:45.662 [2024-10-21 12:18:21.782688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
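(Note: with networking in place, nvmfappstart launches the target inside the namespace and blocks until the RPC socket answers; the --interrupt-mode flag is why the thread.c notices above show the app thread and every nvmf_tgt_poll_group being set to intr mode. A minimal start-and-wait equivalent is sketched below; this is an approximation, not the actual waitforlisten helper from autotest_common.sh, which retries with a bounded attempt count, and rpc_get_methods is used here only as a cheap liveness probe:

# sketch: start nvmf_tgt in the target namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the app is ready to serve requests
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Once the probe succeeds, the waitforlisten call in the trace below returns 0 and the test proceeds to configure the transport and bdevs over that same RPC socket.)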
00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.923 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:46.184 [2024-10-21 12:18:22.573451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.184 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.444 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:46.444 12:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.444 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:46.444 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.705 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:46.705 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.965 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:46.965 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:47.226 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.226 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:47.226 12:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.487 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:47.487 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.749 12:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:47.749 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:48.010 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:48.010 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:48.010 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:48.273 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:48.273 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:48.533 12:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.533 [2024-10-21 12:18:25.093414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.534 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:48.794 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:49.055 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:33:49.627 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:33:51.538 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:51.538 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:33:51.539 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:51.539 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:33:51.539 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:51.539 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:33:51.539 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:51.539 [global] 00:33:51.539 thread=1 00:33:51.539 invalidate=1 00:33:51.539 rw=write 00:33:51.539 time_based=1 00:33:51.539 runtime=1 00:33:51.539 ioengine=libaio 00:33:51.539 direct=1 00:33:51.539 bs=4096 00:33:51.539 iodepth=1 00:33:51.539 norandommap=0 00:33:51.539 numjobs=1 00:33:51.539 00:33:51.539 verify_dump=1 00:33:51.539 verify_backlog=512 00:33:51.539 verify_state_save=0 00:33:51.539 do_verify=1 00:33:51.539 verify=crc32c-intel 00:33:51.539 [job0] 00:33:51.539 filename=/dev/nvme0n1 00:33:51.539 [job1] 00:33:51.539 filename=/dev/nvme0n2 00:33:51.539 [job2] 00:33:51.539 filename=/dev/nvme0n3 00:33:51.539 [job3] 00:33:51.539 filename=/dev/nvme0n4 00:33:51.539 Could not set queue depth (nvme0n1) 00:33:51.539 Could not set queue depth (nvme0n2) 00:33:51.539 Could not set queue depth (nvme0n3) 00:33:51.539 Could not set queue depth (nvme0n4) 00:33:51.800 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.800 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.800 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.800 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.800 fio-3.35 00:33:51.800 Starting 4 threads 00:33:53.198 00:33:53.198 job0: (groupid=0, jobs=1): err= 0: pid=1244419: Mon Oct 21 12:18:29 2024 00:33:53.198 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:33:53.198 slat (nsec): min=26268, max=27639, avg=26885.88, stdev=355.62 00:33:53.198 clat (usec): min=40876, max=42199, avg=41562.17, stdev=549.92 00:33:53.198 lat (usec): min=40903, max=42226, avg=41589.06, stdev=549.87 00:33:53.198 clat percentiles (usec): 00:33:53.198 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:53.198 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:33:53.198 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:53.198 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:53.198 | 99.99th=[42206] 00:33:53.198 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:33:53.198 slat (nsec): min=7634, max=70049, avg=32519.97, stdev=8340.94 00:33:53.198 clat (usec): min=212, max=971, avg=548.84, stdev=156.86 00:33:53.198 lat (usec): min=247, max=1005, avg=581.36, stdev=158.56 00:33:53.198 clat percentiles (usec): 00:33:53.198 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 396], 00:33:53.198 | 30.00th=[ 445], 40.00th=[ 498], 50.00th=[ 553], 60.00th=[ 603], 00:33:53.198 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 799], 
00:33:53.198 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 971], 99.95th=[ 971], 00:33:53.198 | 99.99th=[ 971] 00:33:53.198 bw ( KiB/s): min= 4096, max= 4096, per=46.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.198 lat (usec) : 250=0.19%, 500=39.13%, 750=45.75%, 1000=11.72% 00:33:53.198 lat (msec) : 50=3.21% 00:33:53.198 cpu : usr=1.09%, sys=2.09%, ctx=529, majf=0, minf=1 00:33:53.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.198 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.198 job1: (groupid=0, jobs=1): err= 0: pid=1244420: Mon Oct 21 12:18:29 2024 00:33:53.198 read: IOPS=17, BW=69.9KiB/s (71.6kB/s)(72.0KiB/1030msec) 00:33:53.198 slat (nsec): min=25518, max=26197, avg=25769.61, stdev=191.92 00:33:53.198 clat (usec): min=1192, max=42089, avg=39662.49, stdev=9603.14 00:33:53.198 lat (usec): min=1217, max=42115, avg=39688.26, stdev=9603.15 00:33:53.198 clat percentiles (usec): 00:33:53.198 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41157], 20.00th=[41681], 00:33:53.198 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:53.198 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:53.198 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:53.198 | 99.99th=[42206] 00:33:53.198 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:33:53.198 slat (nsec): min=9757, max=64309, avg=32134.92, stdev=7506.60 00:33:53.198 clat (usec): min=174, max=979, avg=575.51, stdev=149.23 00:33:53.198 lat (usec): min=208, max=1011, avg=607.65, stdev=151.28 00:33:53.198 clat percentiles (usec): 00:33:53.198 | 1.00th=[ 247], 5.00th=[ 318], 10.00th=[ 371], 20.00th=[ 453], 00:33:53.198 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 586], 60.00th=[ 619], 00:33:53.198 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:33:53.198 | 99.00th=[ 881], 99.50th=[ 955], 99.90th=[ 979], 99.95th=[ 979], 00:33:53.198 | 99.99th=[ 979] 00:33:53.198 bw ( KiB/s): min= 4096, max= 4096, per=46.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.198 lat (usec) : 250=1.13%, 500=29.43%, 750=53.77%, 1000=12.26% 00:33:53.198 lat (msec) : 2=0.19%, 50=3.21% 00:33:53.198 cpu : usr=0.58%, sys=1.75%, ctx=530, majf=0, minf=1 00:33:53.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.198 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.198 job2: (groupid=0, jobs=1): err= 0: pid=1244421: Mon Oct 21 12:18:29 2024 00:33:53.198 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:53.198 slat (nsec): min=8716, max=61539, avg=26168.38, stdev=3492.37 00:33:53.198 clat (usec): min=699, max=1428, avg=1138.86, stdev=120.39 00:33:53.198 lat (usec): min=725, max=1453, avg=1165.03, stdev=120.67 00:33:53.198 clat percentiles (usec): 00:33:53.198 | 1.00th=[ 807], 5.00th=[ 930], 10.00th=[ 988], 
20.00th=[ 1045], 00:33:53.198 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:33:53.198 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:33:53.199 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1434], 99.95th=[ 1434], 00:33:53.199 | 99.99th=[ 1434] 00:33:53.199 write: IOPS=600, BW=2402KiB/s (2459kB/s)(2404KiB/1001msec); 0 zone resets 00:33:53.199 slat (nsec): min=9733, max=52899, avg=30567.43, stdev=8724.73 00:33:53.199 clat (usec): min=249, max=1078, avg=626.31, stdev=131.42 00:33:53.199 lat (usec): min=259, max=1112, avg=656.88, stdev=134.26 00:33:53.199 clat percentiles (usec): 00:33:53.199 | 1.00th=[ 322], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 515], 00:33:53.199 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:33:53.199 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 848], 00:33:53.199 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:33:53.199 | 99.99th=[ 1074] 00:33:53.199 bw ( KiB/s): min= 4096, max= 4096, per=46.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.199 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.199 lat (usec) : 250=0.09%, 500=9.25%, 750=35.76%, 1000=14.38% 00:33:53.199 lat (msec) : 2=40.52% 00:33:53.199 cpu : usr=1.50%, sys=3.50%, ctx=1113, majf=0, minf=1 00:33:53.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.199 issued rwts: total=512,601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.199 job3: (groupid=0, jobs=1): err= 0: pid=1244422: Mon Oct 21 12:18:29 2024 00:33:53.199 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:53.199 slat (nsec): min=7209, max=60171, avg=26573.16, stdev=4356.27 00:33:53.199 clat (usec): min=687, max=1497, avg=1075.09, stdev=123.29 00:33:53.199 lat (usec): min=714, max=1523, avg=1101.66, stdev=123.65 00:33:53.199 clat percentiles (usec): 00:33:53.199 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 930], 20.00th=[ 988], 00:33:53.199 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:33:53.199 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:33:53.199 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[ 1500], 99.95th=[ 1500], 00:33:53.199 | 99.99th=[ 1500] 00:33:53.199 write: IOPS=659, BW=2637KiB/s (2701kB/s)(2640KiB/1001msec); 0 zone resets 00:33:53.199 slat (nsec): min=9673, max=56191, avg=31797.87, stdev=7569.60 00:33:53.199 clat (usec): min=253, max=1132, avg=614.21, stdev=143.30 00:33:53.199 lat (usec): min=273, max=1148, avg=646.01, stdev=145.41 00:33:53.199 clat percentiles (usec): 00:33:53.199 | 1.00th=[ 297], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 486], 00:33:53.199 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:33:53.199 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832], 00:33:53.199 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1139], 99.95th=[ 1139], 00:33:53.199 | 99.99th=[ 1139] 00:33:53.199 bw ( KiB/s): min= 4096, max= 4096, per=46.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.199 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.199 lat (usec) : 500=12.80%, 750=35.32%, 1000=17.83% 00:33:53.199 lat (msec) : 2=34.04% 00:33:53.199 cpu : usr=1.60%, sys=3.80%, ctx=1173, majf=0, minf=1 00:33:53.199 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.199 issued rwts: total=512,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.199 00:33:53.199 Run status group 0 (all jobs): 00:33:53.199 READ: bw=4113KiB/s (4211kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=4236KiB (4338kB), run=1001-1030msec 00:33:53.199 WRITE: bw=8874KiB/s (9087kB/s), 1988KiB/s-2637KiB/s (2036kB/s-2701kB/s), io=9140KiB (9359kB), run=1001-1030msec 00:33:53.199 00:33:53.199 Disk stats (read/write): 00:33:53.199 nvme0n1: ios=62/512, merge=0/0, ticks=535/215, in_queue=750, util=86.97% 00:33:53.199 nvme0n2: ios=50/512, merge=0/0, ticks=878/268, in_queue=1146, util=92.66% 00:33:53.199 nvme0n3: ios=429/512, merge=0/0, ticks=471/306, in_queue=777, util=88.56% 00:33:53.199 nvme0n4: ios=455/512, merge=0/0, ticks=467/301, in_queue=768, util=89.60% 00:33:53.199 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:53.199 [global] 00:33:53.199 thread=1 00:33:53.199 invalidate=1 00:33:53.199 rw=randwrite 00:33:53.199 time_based=1 00:33:53.199 runtime=1 00:33:53.199 ioengine=libaio 00:33:53.199 direct=1 00:33:53.199 bs=4096 00:33:53.199 iodepth=1 00:33:53.199 norandommap=0 00:33:53.199 numjobs=1 00:33:53.199 00:33:53.199 verify_dump=1 00:33:53.199 verify_backlog=512 00:33:53.199 verify_state_save=0 00:33:53.199 do_verify=1 00:33:53.199 verify=crc32c-intel 00:33:53.199 [job0] 00:33:53.199 filename=/dev/nvme0n1 00:33:53.199 [job1] 00:33:53.199 filename=/dev/nvme0n2 00:33:53.199 [job2] 00:33:53.199 filename=/dev/nvme0n3 00:33:53.199 [job3] 00:33:53.199 filename=/dev/nvme0n4 00:33:53.199 Could not set queue depth (nvme0n1) 00:33:53.199 Could not set queue depth (nvme0n2) 00:33:53.199 Could not set queue depth (nvme0n3) 00:33:53.199 Could not set queue depth (nvme0n4) 00:33:53.460 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.460 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.460 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.460 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.460 fio-3.35 00:33:53.460 Starting 4 threads 00:33:54.848 00:33:54.848 job0: (groupid=0, jobs=1): err= 0: pid=1244947: Mon Oct 21 12:18:31 2024 00:33:54.848 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:54.848 slat (nsec): min=6610, max=57057, avg=26683.70, stdev=3967.35 00:33:54.848 clat (usec): min=690, max=1767, avg=1102.04, stdev=135.01 00:33:54.848 lat (usec): min=708, max=1798, avg=1128.72, stdev=135.67 00:33:54.848 clat percentiles (usec): 00:33:54.848 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 930], 20.00th=[ 1012], 00:33:54.849 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1139], 00:33:54.849 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287], 00:33:54.849 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1762], 99.95th=[ 1762], 00:33:54.849 | 99.99th=[ 1762] 00:33:54.849 write: IOPS=648, BW=2593KiB/s 
(2656kB/s)(2596KiB/1001msec); 0 zone resets 00:33:54.849 slat (nsec): min=8932, max=67948, avg=29979.57, stdev=8492.27 00:33:54.849 clat (usec): min=183, max=961, avg=606.14, stdev=134.26 00:33:54.849 lat (usec): min=209, max=993, avg=636.12, stdev=137.13 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 245], 5.00th=[ 375], 10.00th=[ 441], 20.00th=[ 494], 00:33:54.849 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:33:54.849 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:33:54.849 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:33:54.849 | 99.99th=[ 963] 00:33:54.849 bw ( KiB/s): min= 4096, max= 4096, per=42.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.849 lat (usec) : 250=0.60%, 500=10.94%, 750=38.16%, 1000=14.30% 00:33:54.849 lat (msec) : 2=36.00% 00:33:54.849 cpu : usr=2.50%, sys=4.40%, ctx=1161, majf=0, minf=1 00:33:54.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 issued rwts: total=512,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.849 job1: (groupid=0, jobs=1): err= 0: pid=1244948: Mon Oct 21 12:18:31 2024 00:33:54.849 read: IOPS=237, BW=951KiB/s (974kB/s)(952KiB/1001msec) 00:33:54.849 slat (nsec): min=9384, max=46332, avg=26995.37, stdev=4716.37 00:33:54.849 clat (usec): min=674, max=42056, avg=2976.67, stdev=8594.98 00:33:54.849 lat (usec): min=701, max=42085, avg=3003.66, stdev=8595.14 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 742], 5.00th=[ 857], 10.00th=[ 938], 20.00th=[ 988], 00:33:54.849 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:33:54.849 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1385], 00:33:54.849 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.849 | 99.99th=[42206] 00:33:54.849 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:54.849 slat (nsec): min=10329, max=61665, avg=30313.28, stdev=10202.93 00:33:54.849 clat (usec): min=198, max=964, avg=516.10, stdev=160.76 00:33:54.849 lat (usec): min=236, max=985, avg=546.41, stdev=160.12 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 231], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 355], 00:33:54.849 | 30.00th=[ 404], 40.00th=[ 469], 50.00th=[ 510], 60.00th=[ 553], 00:33:54.849 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 734], 95.00th=[ 816], 00:33:54.849 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:33:54.849 | 99.99th=[ 963] 00:33:54.849 bw ( KiB/s): min= 4096, max= 4096, per=42.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.849 lat (usec) : 250=1.20%, 500=31.20%, 750=29.73%, 1000=13.20% 00:33:54.849 lat (msec) : 2=23.20%, 50=1.47% 00:33:54.849 cpu : usr=1.00%, sys=2.40%, ctx=751, majf=0, minf=1 00:33:54.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 issued rwts: total=238,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.849 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:33:54.849 job2: (groupid=0, jobs=1): err= 0: pid=1244949: Mon Oct 21 12:18:31 2024 00:33:54.849 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:54.849 slat (nsec): min=6791, max=76664, avg=27351.61, stdev=5453.78 00:33:54.849 clat (usec): min=594, max=1432, avg=970.36, stdev=122.31 00:33:54.849 lat (usec): min=623, max=1460, avg=997.71, stdev=123.59 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 799], 20.00th=[ 881], 00:33:54.849 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1012], 00:33:54.849 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1156], 00:33:54.849 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1434], 99.95th=[ 1434], 00:33:54.849 | 99.99th=[ 1434] 00:33:54.849 write: IOPS=739, BW=2957KiB/s (3028kB/s)(2960KiB/1001msec); 0 zone resets 00:33:54.849 slat (nsec): min=9331, max=70024, avg=30382.13, stdev=10622.69 00:33:54.849 clat (usec): min=228, max=1080, avg=617.94, stdev=131.62 00:33:54.849 lat (usec): min=239, max=1115, avg=648.33, stdev=135.97 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 310], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 506], 00:33:54.849 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:33:54.849 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:33:54.849 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:33:54.849 | 99.99th=[ 1074] 00:33:54.849 bw ( KiB/s): min= 4096, max= 4096, per=42.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.849 lat (usec) : 250=0.08%, 500=11.10%, 750=40.73%, 1000=30.19% 00:33:54.849 lat (msec) : 2=17.89% 00:33:54.849 cpu : usr=1.50%, sys=5.90%, ctx=1254, majf=0, minf=1 00:33:54.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 issued rwts: total=512,740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.849 job3: (groupid=0, jobs=1): err= 0: pid=1244950: Mon Oct 21 12:18:31 2024 00:33:54.849 read: IOPS=246, BW=985KiB/s (1009kB/s)(988KiB/1003msec) 00:33:54.849 slat (nsec): min=25295, max=60126, avg=26104.74, stdev=2356.87 00:33:54.849 clat (usec): min=486, max=41959, avg=2785.89, stdev=8358.35 00:33:54.849 lat (usec): min=512, max=41985, avg=2812.00, stdev=8358.34 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 578], 5.00th=[ 725], 10.00th=[ 807], 20.00th=[ 881], 00:33:54.849 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1020], 00:33:54.849 | 70.00th=[ 1057], 80.00th=[ 1123], 90.00th=[ 1254], 95.00th=[ 1450], 00:33:54.849 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:54.849 | 99.99th=[42206] 00:33:54.849 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:33:54.849 slat (nsec): min=10050, max=66537, avg=32902.61, stdev=6349.65 00:33:54.849 clat (usec): min=208, max=1113, avg=555.49, stdev=165.81 00:33:54.849 lat (usec): min=241, max=1145, avg=588.39, stdev=166.51 00:33:54.849 clat percentiles (usec): 00:33:54.849 | 1.00th=[ 262], 5.00th=[ 314], 10.00th=[ 338], 20.00th=[ 404], 00:33:54.849 | 30.00th=[ 457], 40.00th=[ 502], 50.00th=[ 545], 60.00th=[ 594], 00:33:54.849 | 70.00th=[ 635], 80.00th=[ 
701], 90.00th=[ 775], 95.00th=[ 832], 00:33:54.849 | 99.00th=[ 971], 99.50th=[ 1037], 99.90th=[ 1106], 99.95th=[ 1106], 00:33:54.849 | 99.99th=[ 1106] 00:33:54.849 bw ( KiB/s): min= 4096, max= 4096, per=42.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.849 lat (usec) : 250=0.66%, 500=26.48%, 750=32.81%, 1000=23.98% 00:33:54.849 lat (msec) : 2=14.62%, 50=1.45% 00:33:54.849 cpu : usr=1.80%, sys=1.70%, ctx=760, majf=0, minf=1 00:33:54.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.849 issued rwts: total=247,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.849 00:33:54.849 Run status group 0 (all jobs): 00:33:54.849 READ: bw=6018KiB/s (6162kB/s), 951KiB/s-2046KiB/s (974kB/s-2095kB/s), io=6036KiB (6181kB), run=1001-1003msec 00:33:54.849 WRITE: bw=9623KiB/s (9854kB/s), 2042KiB/s-2957KiB/s (2091kB/s-3028kB/s), io=9652KiB (9884kB), run=1001-1003msec 00:33:54.849 00:33:54.849 Disk stats (read/write): 00:33:54.849 nvme0n1: ios=500/512, merge=0/0, ticks=781/240, in_queue=1021, util=94.59% 00:33:54.849 nvme0n2: ios=122/512, merge=0/0, ticks=1407/239, in_queue=1646, util=96.53% 00:33:54.849 nvme0n3: ios=519/512, merge=0/0, ticks=1371/255, in_queue=1626, util=96.62% 00:33:54.849 nvme0n4: ios=265/512, merge=0/0, ticks=1426/269, in_queue=1695, util=96.36% 00:33:54.849 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:54.849 [global] 00:33:54.849 thread=1 00:33:54.849 invalidate=1 00:33:54.849 rw=write 00:33:54.849 time_based=1 00:33:54.849 runtime=1 00:33:54.849 ioengine=libaio 00:33:54.849 direct=1 00:33:54.849 bs=4096 00:33:54.849 iodepth=128 00:33:54.849 norandommap=0 00:33:54.849 numjobs=1 00:33:54.849 00:33:54.849 verify_dump=1 00:33:54.849 verify_backlog=512 00:33:54.849 verify_state_save=0 00:33:54.849 do_verify=1 00:33:54.849 verify=crc32c-intel 00:33:54.849 [job0] 00:33:54.849 filename=/dev/nvme0n1 00:33:54.849 [job1] 00:33:54.849 filename=/dev/nvme0n2 00:33:54.849 [job2] 00:33:54.849 filename=/dev/nvme0n3 00:33:54.849 [job3] 00:33:54.849 filename=/dev/nvme0n4 00:33:54.849 Could not set queue depth (nvme0n1) 00:33:54.849 Could not set queue depth (nvme0n2) 00:33:54.849 Could not set queue depth (nvme0n3) 00:33:54.849 Could not set queue depth (nvme0n4) 00:33:55.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.111 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.111 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.111 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.111 fio-3.35 00:33:55.111 Starting 4 threads 00:33:56.499 00:33:56.499 job0: (groupid=0, jobs=1): err= 0: pid=1245474: Mon Oct 21 12:18:32 2024 00:33:56.499 read: IOPS=5724, BW=22.4MiB/s (23.4MB/s)(23.4MiB/1048msec) 00:33:56.499 slat (nsec): min=905, max=17158k, avg=81459.87, stdev=655593.35 00:33:56.499 clat (usec): min=2898, max=58327, avg=12177.88, 
stdev=8303.26 00:33:56.499 lat (usec): min=2904, max=59782, avg=12259.34, stdev=8334.26 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 4228], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 6980], 00:33:56.499 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11731], 00:33:56.499 | 70.00th=[12518], 80.00th=[13435], 90.00th=[16712], 95.00th=[28967], 00:33:56.499 | 99.00th=[50594], 99.50th=[51119], 99.90th=[58459], 99.95th=[58459], 00:33:56.499 | 99.99th=[58459] 00:33:56.499 write: IOPS=5862, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1048msec); 0 zone resets 00:33:56.499 slat (nsec): min=1603, max=8880.4k, avg=71640.21, stdev=520613.86 00:33:56.499 clat (usec): min=669, max=30512, avg=9674.58, stdev=3615.87 00:33:56.499 lat (usec): min=851, max=30521, avg=9746.22, stdev=3652.98 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 3097], 5.00th=[ 4686], 10.00th=[ 5538], 20.00th=[ 6521], 00:33:56.499 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:33:56.499 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13566], 95.00th=[15795], 00:33:56.499 | 99.00th=[19792], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:33:56.499 | 99.99th=[30540] 00:33:56.499 bw ( KiB/s): min=22816, max=26336, per=25.17%, avg=24576.00, stdev=2489.02, samples=2 00:33:56.499 iops : min= 5704, max= 6584, avg=6144.00, stdev=622.25, samples=2 00:33:56.499 lat (usec) : 750=0.01%, 1000=0.05% 00:33:56.499 lat (msec) : 2=0.36%, 4=1.72%, 10=51.31%, 20=42.31%, 50=3.34% 00:33:56.499 lat (msec) : 100=0.91% 00:33:56.499 cpu : usr=4.78%, sys=6.30%, ctx=323, majf=0, minf=1 00:33:56.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:56.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:56.499 issued rwts: total=5999,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:56.499 job1: (groupid=0, jobs=1): err= 0: pid=1245475: Mon Oct 21 12:18:32 2024 00:33:56.499 read: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.2MiB/1007msec) 00:33:56.499 slat (nsec): min=951, max=10561k, avg=56739.51, stdev=427049.62 00:33:56.499 clat (usec): min=3532, max=26728, avg=7620.49, stdev=2414.52 00:33:56.499 lat (usec): min=3537, max=26813, avg=7677.23, stdev=2440.80 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 4228], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5932], 00:33:56.499 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7504], 00:33:56.499 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[12125], 00:33:56.499 | 99.00th=[16450], 99.50th=[18220], 99.90th=[24249], 99.95th=[24511], 00:33:56.499 | 99.99th=[26608] 00:33:56.499 write: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec); 0 zone resets 00:33:56.499 slat (nsec): min=1591, max=12191k, avg=56331.85, stdev=424269.69 00:33:56.499 clat (usec): min=1174, max=20216, avg=7458.20, stdev=2324.51 00:33:56.499 lat (usec): min=1184, max=20220, avg=7514.53, stdev=2332.53 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 4146], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5538], 00:33:56.499 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7504], 00:33:56.499 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10159], 95.00th=[11469], 00:33:56.499 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:33:56.499 | 99.99th=[20317] 00:33:56.499 bw ( KiB/s): min=32768, max=36240, 
per=35.34%, avg=34504.00, stdev=2455.07, samples=2 00:33:56.499 iops : min= 8192, max= 9060, avg=8626.00, stdev=613.77, samples=2 00:33:56.499 lat (msec) : 2=0.01%, 4=0.65%, 10=87.95%, 20=11.18%, 50=0.21% 00:33:56.499 cpu : usr=5.86%, sys=7.85%, ctx=502, majf=0, minf=2 00:33:56.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:56.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:56.499 issued rwts: total=8241,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:56.499 job2: (groupid=0, jobs=1): err= 0: pid=1245476: Mon Oct 21 12:18:32 2024 00:33:56.499 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:33:56.499 slat (nsec): min=914, max=8281.0k, avg=71406.89, stdev=509070.57 00:33:56.499 clat (usec): min=4754, max=29822, avg=9528.84, stdev=2776.78 00:33:56.499 lat (usec): min=4756, max=31305, avg=9600.24, stdev=2802.58 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7111], 00:33:56.499 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[ 9896], 00:33:56.499 | 70.00th=[10421], 80.00th=[11338], 90.00th=[13304], 95.00th=[14353], 00:33:56.499 | 99.00th=[17695], 99.50th=[18220], 99.90th=[29754], 99.95th=[29754], 00:33:56.499 | 99.99th=[29754] 00:33:56.499 write: IOPS=6588, BW=25.7MiB/s (27.0MB/s)(25.9MiB/1007msec); 0 zone resets 00:33:56.499 slat (nsec): min=1591, max=8388.9k, avg=78722.49, stdev=491770.12 00:33:56.499 clat (usec): min=828, max=47106, avg=10405.60, stdev=6239.48 00:33:56.499 lat (usec): min=837, max=47111, avg=10484.32, stdev=6283.31 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 4621], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6783], 00:33:56.499 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9896], 00:33:56.499 | 70.00th=[10683], 80.00th=[11207], 90.00th=[16057], 95.00th=[21365], 00:33:56.499 | 99.00th=[38536], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:33:56.499 | 99.99th=[46924] 00:33:56.499 bw ( KiB/s): min=24576, max=27488, per=26.66%, avg=26032.00, stdev=2059.09, samples=2 00:33:56.499 iops : min= 6144, max= 6872, avg=6508.00, stdev=514.77, samples=2 00:33:56.499 lat (usec) : 1000=0.02% 00:33:56.499 lat (msec) : 2=0.05%, 4=0.09%, 10=61.01%, 20=35.38%, 50=3.45% 00:33:56.499 cpu : usr=5.37%, sys=6.16%, ctx=435, majf=0, minf=1 00:33:56.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:56.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:56.499 issued rwts: total=6144,6635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:56.499 job3: (groupid=0, jobs=1): err= 0: pid=1245477: Mon Oct 21 12:18:32 2024 00:33:56.499 read: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1008msec) 00:33:56.499 slat (nsec): min=972, max=12376k, avg=115444.26, stdev=786665.34 00:33:56.499 clat (usec): min=1361, max=64099, avg=16457.82, stdev=9364.18 00:33:56.499 lat (usec): min=1369, max=64104, avg=16573.26, stdev=9410.78 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 2180], 5.00th=[ 4047], 10.00th=[ 6849], 20.00th=[ 8979], 00:33:56.499 | 30.00th=[10290], 40.00th=[12387], 50.00th=[14615], 60.00th=[17957], 00:33:56.499 | 70.00th=[19268], 
80.00th=[22152], 90.00th=[28705], 95.00th=[35914], 00:33:56.499 | 99.00th=[45876], 99.50th=[45876], 99.90th=[60031], 99.95th=[60031], 00:33:56.499 | 99.99th=[64226] 00:33:56.499 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:33:56.499 slat (nsec): min=1675, max=10760k, avg=114174.39, stdev=683316.87 00:33:56.499 clat (usec): min=1263, max=36962, avg=15195.91, stdev=7731.58 00:33:56.499 lat (usec): min=1273, max=36983, avg=15310.08, stdev=7803.60 00:33:56.499 clat percentiles (usec): 00:33:56.499 | 1.00th=[ 3621], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 8225], 00:33:56.499 | 30.00th=[10028], 40.00th=[11338], 50.00th=[13960], 60.00th=[16450], 00:33:56.499 | 70.00th=[17957], 80.00th=[19268], 90.00th=[28181], 95.00th=[32900], 00:33:56.499 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:33:56.499 | 99.99th=[36963] 00:33:56.499 bw ( KiB/s): min=13208, max=19560, per=16.78%, avg=16384.00, stdev=4491.54, samples=2 00:33:56.499 iops : min= 3302, max= 4890, avg=4096.00, stdev=1122.89, samples=2 00:33:56.499 lat (msec) : 2=0.68%, 4=2.52%, 10=24.16%, 20=50.99%, 50=21.56% 00:33:56.499 lat (msec) : 100=0.09% 00:33:56.499 cpu : usr=3.28%, sys=3.67%, ctx=322, majf=0, minf=1 00:33:56.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:56.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:56.500 issued rwts: total=3953,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:56.500 00:33:56.500 Run status group 0 (all jobs): 00:33:56.500 READ: bw=90.7MiB/s (95.1MB/s), 15.3MiB/s-32.0MiB/s (16.1MB/s-33.5MB/s), io=95.1MiB (99.7MB), run=1007-1048msec 00:33:56.500 WRITE: bw=95.3MiB/s (100.0MB/s), 15.9MiB/s-33.8MiB/s (16.6MB/s-35.4MB/s), io=99.9MiB (105MB), run=1007-1048msec 00:33:56.500 00:33:56.500 Disk stats (read/write): 00:33:56.500 nvme0n1: ios=4914/5120, merge=0/0, ticks=33070/26864, in_queue=59934, util=97.60% 00:33:56.500 nvme0n2: ios=6705/7167, merge=0/0, ticks=49340/52103, in_queue=101443, util=89.30% 00:33:56.500 nvme0n3: ios=5176/5569, merge=0/0, ticks=40748/45188, in_queue=85936, util=90.51% 00:33:56.500 nvme0n4: ios=3395/3584, merge=0/0, ticks=28208/27121, in_queue=55329, util=98.93% 00:33:56.500 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:56.500 [global] 00:33:56.500 thread=1 00:33:56.500 invalidate=1 00:33:56.500 rw=randwrite 00:33:56.500 time_based=1 00:33:56.500 runtime=1 00:33:56.500 ioengine=libaio 00:33:56.500 direct=1 00:33:56.500 bs=4096 00:33:56.500 iodepth=128 00:33:56.500 norandommap=0 00:33:56.500 numjobs=1 00:33:56.500 00:33:56.500 verify_dump=1 00:33:56.500 verify_backlog=512 00:33:56.500 verify_state_save=0 00:33:56.500 do_verify=1 00:33:56.500 verify=crc32c-intel 00:33:56.500 [job0] 00:33:56.500 filename=/dev/nvme0n1 00:33:56.500 [job1] 00:33:56.500 filename=/dev/nvme0n2 00:33:56.500 [job2] 00:33:56.500 filename=/dev/nvme0n3 00:33:56.500 [job3] 00:33:56.500 filename=/dev/nvme0n4 00:33:56.500 Could not set queue depth (nvme0n1) 00:33:56.500 Could not set queue depth (nvme0n2) 00:33:56.500 Could not set queue depth (nvme0n3) 00:33:56.500 Could not set queue depth (nvme0n4) 00:33:57.070 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.070 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.070 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.070 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.070 fio-3.35 00:33:57.070 Starting 4 threads 00:33:58.455 00:33:58.455 job0: (groupid=0, jobs=1): err= 0: pid=1245957: Mon Oct 21 12:18:34 2024 00:33:58.455 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:33:58.455 slat (nsec): min=897, max=18329k, avg=119687.18, stdev=887161.38 00:33:58.455 clat (usec): min=3837, max=51029, avg=14290.37, stdev=6583.32 00:33:58.455 lat (usec): min=3842, max=51032, avg=14410.06, stdev=6674.61 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 6063], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7635], 00:33:58.455 | 30.00th=[ 8160], 40.00th=[13960], 50.00th=[15008], 60.00th=[15795], 00:33:58.455 | 70.00th=[17433], 80.00th=[19006], 90.00th=[20841], 95.00th=[22152], 00:33:58.455 | 99.00th=[39060], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:33:58.455 | 99.99th=[51119] 00:33:58.455 write: IOPS=4365, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1011msec); 0 zone resets 00:33:58.455 slat (nsec): min=1590, max=12199k, avg=108201.14, stdev=659393.61 00:33:58.455 clat (usec): min=1320, max=81322, avg=15783.61, stdev=12409.17 00:33:58.455 lat (usec): min=1329, max=81330, avg=15891.81, stdev=12488.22 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 6587], 20.00th=[ 7177], 00:33:58.455 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[14484], 60.00th=[15270], 00:33:58.455 | 70.00th=[17171], 80.00th=[18744], 90.00th=[24249], 95.00th=[40109], 00:33:58.455 | 99.00th=[73925], 99.50th=[74974], 99.90th=[81265], 99.95th=[81265], 00:33:58.455 | 99.99th=[81265] 00:33:58.455 bw ( KiB/s): min=13816, max=20472, per=17.32%, avg=17144.00, stdev=4706.50, samples=2 00:33:58.455 iops : min= 3454, max= 5118, avg=4286.00, stdev=1176.63, samples=2 00:33:58.455 lat (msec) : 2=0.11%, 4=0.51%, 10=33.69%, 20=52.50%, 50=11.06% 00:33:58.455 lat (msec) : 100=2.14% 00:33:58.455 cpu : usr=3.17%, sys=4.75%, ctx=343, majf=0, minf=1 00:33:58.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:58.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.455 issued rwts: total=4096,4414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.455 job1: (groupid=0, jobs=1): err= 0: pid=1245969: Mon Oct 21 12:18:34 2024 00:33:58.455 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:33:58.455 slat (nsec): min=929, max=5118.5k, avg=74226.26, stdev=410795.44 00:33:58.455 clat (usec): min=5828, max=19845, avg=9564.02, stdev=1863.98 00:33:58.455 lat (usec): min=5831, max=19869, avg=9638.25, stdev=1898.73 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7767], 20.00th=[ 8291], 00:33:58.455 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:33:58.455 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[12125], 95.00th=[13698], 00:33:58.455 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16909], 99.95th=[17695], 00:33:58.455 | 99.99th=[19792] 00:33:58.455 write: IOPS=6985, 
BW=27.3MiB/s (28.6MB/s)(27.3MiB/1002msec); 0 zone resets 00:33:58.455 slat (nsec): min=1579, max=5634.0k, avg=68484.03, stdev=429282.90 00:33:58.455 clat (usec): min=756, max=18761, avg=9008.69, stdev=1968.56 00:33:58.455 lat (usec): min=3325, max=18782, avg=9077.17, stdev=2015.44 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 5014], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7832], 00:33:58.455 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8586], 00:33:58.455 | 70.00th=[ 9241], 80.00th=[11076], 90.00th=[11863], 95.00th=[12649], 00:33:58.455 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17433], 99.95th=[17433], 00:33:58.455 | 99.99th=[18744] 00:33:58.455 bw ( KiB/s): min=24576, max=30400, per=27.77%, avg=27488.00, stdev=4118.19, samples=2 00:33:58.455 iops : min= 6144, max= 7600, avg=6872.00, stdev=1029.55, samples=2 00:33:58.455 lat (usec) : 1000=0.01% 00:33:58.455 lat (msec) : 4=0.31%, 10=72.73%, 20=26.96% 00:33:58.455 cpu : usr=3.50%, sys=5.29%, ctx=536, majf=0, minf=1 00:33:58.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:58.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.455 issued rwts: total=6656,6999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.455 job2: (groupid=0, jobs=1): err= 0: pid=1245985: Mon Oct 21 12:18:34 2024 00:33:58.455 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:33:58.455 slat (nsec): min=989, max=12065k, avg=87940.88, stdev=629036.66 00:33:58.455 clat (usec): min=3168, max=31069, avg=11163.30, stdev=4944.36 00:33:58.455 lat (usec): min=3175, max=31078, avg=11251.24, stdev=4993.79 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6915], 20.00th=[ 7504], 00:33:58.455 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10421], 00:33:58.455 | 70.00th=[12125], 80.00th=[14615], 90.00th=[18220], 95.00th=[22938], 00:33:58.455 | 99.00th=[27395], 99.50th=[28443], 99.90th=[30278], 99.95th=[31065], 00:33:58.455 | 99.99th=[31065] 00:33:58.455 write: IOPS=5812, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1006msec); 0 zone resets 00:33:58.455 slat (nsec): min=1716, max=13409k, avg=80914.27, stdev=585897.44 00:33:58.455 clat (usec): min=1165, max=62669, avg=11046.72, stdev=7337.76 00:33:58.455 lat (usec): min=1175, max=62678, avg=11127.63, stdev=7375.50 00:33:58.455 clat percentiles (usec): 00:33:58.455 | 1.00th=[ 4686], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6652], 00:33:58.455 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[10552], 00:33:58.455 | 70.00th=[13173], 80.00th=[14615], 90.00th=[15270], 95.00th=[17171], 00:33:58.455 | 99.00th=[53216], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:33:58.455 | 99.99th=[62653] 00:33:58.455 bw ( KiB/s): min=17528, max=28224, per=23.11%, avg=22876.00, stdev=7563.21, samples=2 00:33:58.455 iops : min= 4382, max= 7056, avg=5719.00, stdev=1890.80, samples=2 00:33:58.455 lat (msec) : 2=0.03%, 4=0.51%, 10=56.90%, 20=36.00%, 50=5.93% 00:33:58.455 lat (msec) : 100=0.62% 00:33:58.455 cpu : usr=4.78%, sys=6.07%, ctx=339, majf=0, minf=2 00:33:58.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:58.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.455 issued 
rwts: total=5632,5847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.455 job3: (groupid=0, jobs=1): err= 0: pid=1245992: Mon Oct 21 12:18:34 2024 00:33:58.455 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:33:58.455 slat (nsec): min=988, max=11767k, avg=66158.26, stdev=523584.81 00:33:58.455 clat (usec): min=2642, max=24001, avg=8422.06, stdev=2137.97 00:33:58.455 lat (usec): min=3637, max=24009, avg=8488.22, stdev=2174.29 00:33:58.455 clat percentiles (usec): 00:33:58.456 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6980], 00:33:58.456 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8225], 00:33:58.456 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[11338], 95.00th=[12518], 00:33:58.456 | 99.00th=[15664], 99.50th=[16450], 99.90th=[16450], 99.95th=[16581], 00:33:58.456 | 99.99th=[23987] 00:33:58.456 write: IOPS=7726, BW=30.2MiB/s (31.6MB/s)(30.3MiB/1004msec); 0 zone resets 00:33:58.456 slat (nsec): min=1632, max=13670k, avg=58409.06, stdev=446563.62 00:33:58.456 clat (usec): min=1145, max=23980, avg=8089.50, stdev=2812.58 00:33:58.456 lat (usec): min=1156, max=23989, avg=8147.91, stdev=2819.99 00:33:58.456 clat percentiles (usec): 00:33:58.456 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 6194], 00:33:58.456 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:33:58.456 | 70.00th=[ 8029], 80.00th=[10028], 90.00th=[10683], 95.00th=[13698], 00:33:58.456 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21627], 99.95th=[21890], 00:33:58.456 | 99.99th=[23987] 00:33:58.456 bw ( KiB/s): min=28728, max=32712, per=31.04%, avg=30720.00, stdev=2817.11, samples=2 00:33:58.456 iops : min= 7182, max= 8178, avg=7680.00, stdev=704.28, samples=2 00:33:58.456 lat (msec) : 2=0.01%, 4=0.36%, 10=78.88%, 20=20.20%, 50=0.54% 00:33:58.456 cpu : usr=4.79%, sys=8.08%, ctx=457, majf=0, minf=2 00:33:58.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:58.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.456 issued rwts: total=7680,7757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.456 00:33:58.456 Run status group 0 (all jobs): 00:33:58.456 READ: bw=93.0MiB/s (97.5MB/s), 15.8MiB/s-29.9MiB/s (16.6MB/s-31.3MB/s), io=94.0MiB (98.6MB), run=1002-1011msec 00:33:58.456 WRITE: bw=96.7MiB/s (101MB/s), 17.1MiB/s-30.2MiB/s (17.9MB/s-31.6MB/s), io=97.7MiB (102MB), run=1002-1011msec 00:33:58.456 00:33:58.456 Disk stats (read/write): 00:33:58.456 nvme0n1: ios=3604/4096, merge=0/0, ticks=33845/43819, in_queue=77664, util=85.77% 00:33:58.456 nvme0n2: ios=5560/5632, merge=0/0, ticks=21766/19151, in_queue=40917, util=89.60% 00:33:58.456 nvme0n3: ios=4598/4615, merge=0/0, ticks=43929/49096, in_queue=93025, util=92.83% 00:33:58.456 nvme0n4: ios=6201/6559, merge=0/0, ticks=49209/52076, in_queue=101285, util=94.23% 00:33:58.456 12:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:58.456 12:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1246051 00:33:58.456 12:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:58.456 12:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:33:58.456 [global]
00:33:58.456 thread=1
00:33:58.456 invalidate=1
00:33:58.456 rw=read
00:33:58.456 time_based=1
00:33:58.456 runtime=10
00:33:58.456 ioengine=libaio
00:33:58.456 direct=1
00:33:58.456 bs=4096
00:33:58.456 iodepth=1
00:33:58.456 norandommap=1
00:33:58.456 numjobs=1
00:33:58.456
00:33:58.456 [job0]
00:33:58.456 filename=/dev/nvme0n1
00:33:58.456 [job1]
00:33:58.456 filename=/dev/nvme0n2
00:33:58.456 [job2]
00:33:58.456 filename=/dev/nvme0n3
00:33:58.456 [job3]
00:33:58.456 filename=/dev/nvme0n4
00:33:58.456 Could not set queue depth (nvme0n1)
00:33:58.456 Could not set queue depth (nvme0n2)
00:33:58.456 Could not set queue depth (nvme0n3)
00:33:58.456 Could not set queue depth (nvme0n4)
00:33:58.456 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:58.456 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:58.456 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:58.456 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:58.456 fio-3.35
00:33:58.456 Starting 4 threads
00:34:01.760 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:01.760 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9416704, buflen=4096
00:34:01.760 fio: pid=1246454, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:01.760 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:01.760 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11100160, buflen=4096
00:34:01.760 fio: pid=1246448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:01.760 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:01.760 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:01.760 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=376832, buflen=4096
00:34:01.760 fio: pid=1246416, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:01.760 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:01.760 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:01.760 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4063232, buflen=4096
00:34:01.760 fio: pid=1246431, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:02.022 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:02.022 12:18:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:02.022 00:34:02.022 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1246416: Mon Oct 21 12:18:38 2024 00:34:02.022 read: IOPS=31, BW=124KiB/s (127kB/s)(368KiB/2974msec) 00:34:02.022 slat (usec): min=7, max=11654, avg=150.07, stdev=1205.88 00:34:02.022 clat (usec): min=450, max=41972, avg=31931.84, stdev=16983.59 00:34:02.022 lat (usec): min=476, max=52945, avg=32083.25, stdev=17097.37 00:34:02.022 clat percentiles (usec): 00:34:02.022 | 1.00th=[ 449], 5.00th=[ 734], 10.00th=[ 873], 20.00th=[ 1106], 00:34:02.022 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:02.022 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:02.022 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:02.022 | 99.99th=[42206] 00:34:02.022 bw ( KiB/s): min= 96, max= 184, per=1.67%, avg=129.60, stdev=34.59, samples=5 00:34:02.022 iops : min= 24, max= 46, avg=32.40, stdev= 8.65, samples=5 00:34:02.022 lat (usec) : 500=1.08%, 750=4.30%, 1000=11.83% 00:34:02.022 lat (msec) : 2=5.38%, 50=76.34% 00:34:02.022 cpu : usr=0.13%, sys=0.00%, ctx=94, majf=0, minf=1 00:34:02.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.022 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.022 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.022 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1246431: Mon Oct 21 12:18:38 2024 00:34:02.022 read: IOPS=315, BW=1261KiB/s (1291kB/s)(3968KiB/3147msec) 00:34:02.022 slat (usec): min=10, max=10880, avg=49.52, stdev=422.42 00:34:02.022 clat (usec): min=603, max=42079, avg=3093.53, stdev=8622.79 00:34:02.022 lat (usec): min=633, max=42105, avg=3132.14, stdev=8623.42 00:34:02.022 clat percentiles (usec): 00:34:02.022 | 1.00th=[ 889], 5.00th=[ 979], 10.00th=[ 1029], 20.00th=[ 1074], 00:34:02.022 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:34:02.022 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1500], 00:34:02.022 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:02.022 | 99.99th=[42206] 00:34:02.022 bw ( KiB/s): min= 696, max= 2024, per=16.70%, avg=1293.83, stdev=587.63, samples=6 00:34:02.022 iops : min= 174, max= 506, avg=323.33, stdev=147.00, samples=6 00:34:02.022 lat (usec) : 750=0.20%, 1000=6.14% 00:34:02.022 lat (msec) : 2=88.62%, 4=0.10%, 50=4.83% 00:34:02.022 cpu : usr=0.22%, sys=1.18%, ctx=997, majf=0, minf=2 00:34:02.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.022 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.022 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.022 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1246448: Mon Oct 21 12:18:38 2024 00:34:02.022 read: IOPS=974, BW=3896KiB/s (3990kB/s)(10.6MiB/2782msec) 
00:34:02.022 slat (nsec): min=6671, max=62618, avg=26939.36, stdev=2265.34 00:34:02.022 clat (usec): min=426, max=1366, avg=985.29, stdev=89.18 00:34:02.022 lat (usec): min=453, max=1393, avg=1012.23, stdev=89.21 00:34:02.022 clat percentiles (usec): 00:34:02.022 | 1.00th=[ 742], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 930], 00:34:02.022 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:34:02.022 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:34:02.022 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:34:02.022 | 99.99th=[ 1369] 00:34:02.022 bw ( KiB/s): min= 3784, max= 4008, per=50.63%, avg=3921.60, stdev=97.23, samples=5 00:34:02.022 iops : min= 946, max= 1002, avg=980.40, stdev=24.31, samples=5 00:34:02.023 lat (usec) : 500=0.04%, 750=1.07%, 1000=59.39% 00:34:02.023 lat (msec) : 2=39.47% 00:34:02.023 cpu : usr=1.69%, sys=4.03%, ctx=2711, majf=0, minf=2 00:34:02.023 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.023 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.023 issued rwts: total=2711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.023 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1246454: Mon Oct 21 12:18:38 2024 00:34:02.023 read: IOPS=885, BW=3540KiB/s (3625kB/s)(9196KiB/2598msec) 00:34:02.023 slat (nsec): min=6905, max=79229, avg=25255.80, stdev=7685.05 00:34:02.023 clat (usec): min=479, max=41930, avg=1089.23, stdev=3034.90 00:34:02.023 lat (usec): min=488, max=41960, avg=1114.48, stdev=3035.21 00:34:02.023 clat percentiles (usec): 00:34:02.023 | 1.00th=[ 545], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 693], 00:34:02.023 | 30.00th=[ 734], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 840], 00:34:02.023 | 70.00th=[ 898], 80.00th=[ 1090], 90.00th=[ 1188], 95.00th=[ 1254], 00:34:02.023 | 99.00th=[ 1336], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:02.023 | 99.99th=[41681] 00:34:02.023 bw ( KiB/s): min= 2016, max= 5032, per=46.90%, avg=3632.00, stdev=1490.18, samples=5 00:34:02.023 iops : min= 504, max= 1258, avg=908.00, stdev=372.55, samples=5 00:34:02.023 lat (usec) : 500=0.17%, 750=33.13%, 1000=41.43% 00:34:02.023 lat (msec) : 2=24.65%, 50=0.57% 00:34:02.023 cpu : usr=0.77%, sys=2.81%, ctx=2301, majf=0, minf=1 00:34:02.023 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.023 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.023 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.023 00:34:02.023 Run status group 0 (all jobs): 00:34:02.023 READ: bw=7745KiB/s (7930kB/s), 124KiB/s-3896KiB/s (127kB/s-3990kB/s), io=23.8MiB (25.0MB), run=2598-3147msec 00:34:02.023 00:34:02.023 Disk stats (read/write): 00:34:02.023 nvme0n1: ios=89/0, merge=0/0, ticks=2815/0, in_queue=2815, util=94.39% 00:34:02.023 nvme0n2: ios=990/0, merge=0/0, ticks=2992/0, in_queue=2992, util=95.48% 00:34:02.023 nvme0n3: ios=2536/0, merge=0/0, ticks=2406/0, in_queue=2406, util=96.03% 00:34:02.023 nvme0n4: ios=2300/0, merge=0/0, ticks=2456/0, in_queue=2456, util=96.39% 00:34:02.023 12:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.023 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:02.287 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.287 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:02.552 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.552 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:02.552 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.552 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1246051 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:02.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:02.813 nvmf hotplug test: fio failed as expected 00:34:02.813 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.075 rmmod nvme_tcp 00:34:03.075 rmmod nvme_fabrics 00:34:03.075 rmmod nvme_keyring 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1242848 ']' 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1242848 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1242848 ']' 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1242848 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:03.075 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242848 00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242848' 00:34:03.336 killing process with pid 1242848 00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1242848 00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1242848 00:34:03.336 
12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:03.336 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:05.885 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:05.885
00:34:05.885 real 0m28.096s
00:34:05.885 user 2m12.013s
00:34:05.885 sys 0m12.233s
00:34:05.885 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:05.886 ************************************
00:34:05.886 END TEST nvmf_fio_target
00:34:05.886 ************************************
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:05.886 ************************************
00:34:05.886 START TEST nvmf_bdevio
00:34:05.886 ************************************
00:34:05.886 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:05.886 * Looking for test storage...
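Before the bdevio setup proceeds, it is worth spelling out the hotplug pattern that the nvmf_fio_target test above just exercised: fio is started in the background against the four NVMe-oF namespaces, the raid/concat/malloc bdevs are deleted out from under it mid-run, and the test passes only if fio exits non-zero. A minimal sketch of that flow, with the fio-wrapper arguments taken from the trace and the surrounding fio.sh control flow abstracted into illustrative shell (paths shortened, variable handling simplified, not the script's verbatim code):

# hotplug sketch: make fio fail on purpose by deleting its backing bdevs
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!                                # background read workload on /dev/nvme0n1..n4
sleep 3                                   # let the jobs start issuing I/O
scripts/rpc.py bdev_raid_delete concat0   # tear down the bdevs under the live workload
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
fio_status=0
wait "$fio_pid" || fio_status=$?          # a non-zero status is the expected outcome
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

In the trace this shows up as err=95 (Operation not supported) on each /dev/nvme0nX once its namespace vanishes, and as fio_status=4 at target/fio.sh@70.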
00:34:05.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.886 --rc genhtml_branch_coverage=1 00:34:05.886 --rc genhtml_function_coverage=1 00:34:05.886 --rc genhtml_legend=1 00:34:05.886 --rc geninfo_all_blocks=1 00:34:05.886 --rc geninfo_unexecuted_blocks=1 00:34:05.886 00:34:05.886 ' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.886 --rc genhtml_branch_coverage=1 00:34:05.886 --rc genhtml_function_coverage=1 00:34:05.886 --rc genhtml_legend=1 00:34:05.886 --rc geninfo_all_blocks=1 00:34:05.886 --rc geninfo_unexecuted_blocks=1 00:34:05.886 00:34:05.886 ' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.886 --rc genhtml_branch_coverage=1 00:34:05.886 --rc genhtml_function_coverage=1 00:34:05.886 --rc genhtml_legend=1 00:34:05.886 --rc geninfo_all_blocks=1 00:34:05.886 --rc geninfo_unexecuted_blocks=1 00:34:05.886 00:34:05.886 ' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.886 --rc genhtml_branch_coverage=1 00:34:05.886 --rc genhtml_function_coverage=1 00:34:05.886 --rc genhtml_legend=1 00:34:05.886 --rc geninfo_all_blocks=1 00:34:05.886 --rc geninfo_unexecuted_blocks=1 00:34:05.886 00:34:05.886 ' 00:34:05.886 12:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.886 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.887 12:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.887 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.032 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:14.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:14.033 12:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:14.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:14.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:14.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:14.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:14.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms
00:34:14.033
00:34:14.033 --- 10.0.0.2 ping statistics ---
00:34:14.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:14.033 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:14.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:14.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:34:14.033
00:34:14.033 --- 10.0.0.1 ping statistics ---
00:34:14.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:14.033 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1251457
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1251457
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1251457 ']'
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:14.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:14.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:14.034 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:14.034 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:14.034 [2024-10-21 12:18:49.806649] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:14.034 [2024-10-21 12:18:49.807773] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:34:14.034 [2024-10-21 12:18:49.807828] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:14.034 [2024-10-21 12:18:49.896391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:14.034 [2024-10-21 12:18:49.949118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:14.034 [2024-10-21 12:18:49.949172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:14.034 [2024-10-21 12:18:49.949180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:14.034 [2024-10-21 12:18:49.949188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:14.034 [2024-10-21 12:18:49.949194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:14.034 [2024-10-21 12:18:49.951565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:34:14.034 [2024-10-21 12:18:49.951797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:34:14.034 [2024-10-21 12:18:49.951954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:34:14.034 [2024-10-21 12:18:49.951955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:14.034 [2024-10-21 12:18:50.034961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
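The four reactors above correspond to the core mask 0x78 passed by nvmfappstart (binary 1111000, cores 3 through 6); the remaining poll-group threads finish switching to interrupt mode just below. Condensed, the namespace plumbing and target launch traced over the last few steps amount to the following sketch (commands as they appear in the trace; the readiness loop at the end is an illustrative stand-in for waitforlisten, not its actual code):

# move the target port into its own netns, keep the peer port as the initiator side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # verify reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# launch the target inside the namespace, then poll its RPC socket until it answers
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done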
00:34:14.034 [2024-10-21 12:18:50.035529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:14.034 [2024-10-21 12:18:50.035919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:14.034 [2024-10-21 12:18:50.036389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:14.034 [2024-10-21 12:18:50.036432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:14.034 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:14.034 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:14.034 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:14.034 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:14.034 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 [2024-10-21 12:18:50.676961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 Malloc0 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 12:18:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 [2024-10-21 12:18:50.765312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:14.295 { 00:34:14.295 "params": { 00:34:14.295 "name": "Nvme$subsystem", 00:34:14.295 "trtype": "$TEST_TRANSPORT", 00:34:14.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.295 "adrfam": "ipv4", 00:34:14.295 "trsvcid": "$NVMF_PORT", 00:34:14.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.295 "hdgst": ${hdgst:-false}, 00:34:14.295 "ddgst": ${ddgst:-false} 00:34:14.295 }, 00:34:14.295 "method": "bdev_nvme_attach_controller" 00:34:14.295 } 00:34:14.295 EOF 00:34:14.295 )") 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:14.295 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:14.295 "params": { 00:34:14.295 "name": "Nvme1", 00:34:14.295 "trtype": "tcp", 00:34:14.295 "traddr": "10.0.0.2", 00:34:14.295 "adrfam": "ipv4", 00:34:14.295 "trsvcid": "4420", 00:34:14.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:14.295 "hdgst": false, 00:34:14.295 "ddgst": false 00:34:14.295 }, 00:34:14.295 "method": "bdev_nvme_attach_controller" 00:34:14.295 }' 00:34:14.295 [2024-10-21 12:18:50.821220] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
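The trace above is the whole fixture in miniature: five RPCs stand up the target (TCP transport, a 64 MiB malloc bdev, a subsystem, its namespace, a listener on 10.0.0.2:4420), then the bdevio host binary is launched with its attach configuration delivered as JSON on an inherited file descriptor. A condensed sketch of the same bring-up follows; rpc.py is SPDK's stock RPC client talking to the default /var/tmp/spdk.sock, the RPC names and arguments are copied from the trace, and the outer "subsystems" wrapper is reconstructed rather than shown above (gen_nvmf_target_json assembles it inside an untraced heredoc, so only the inner attach object appears in the log):

    # target side: the five RPCs issued above, via SPDK's scripts/rpc.py
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB of 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: hand bdevio the rendered config on a /dev/fd path, as the test does
    bdevio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio
    config='{"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                 "adrfam": "ipv4", "trsvcid": "4420",
                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
                 "hostnqn": "nqn.2016-06.io.spdk:host1",
                 "hdgst": false, "ddgst": false}}]}]}'
    "$bdevio" --json <(printf '%s\n' "$config")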
00:34:14.295 [2024-10-21 12:18:50.821293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251579 ] 00:34:14.556 [2024-10-21 12:18:50.905119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:14.556 [2024-10-21 12:18:50.961604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.556 [2024-10-21 12:18:50.961860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:14.556 [2024-10-21 12:18:50.961860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.818 I/O targets: 00:34:14.818 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:14.818 00:34:14.818 00:34:14.818 CUnit - A unit testing framework for C - Version 2.1-3 00:34:14.818 http://cunit.sourceforge.net/ 00:34:14.818 00:34:14.818 00:34:14.818 Suite: bdevio tests on: Nvme1n1 00:34:14.818 Test: blockdev write read block ...passed 00:34:14.818 Test: blockdev write zeroes read block ...passed 00:34:14.818 Test: blockdev write zeroes read no split ...passed 00:34:14.818 Test: blockdev write zeroes read split ...passed 00:34:14.818 Test: blockdev write zeroes read split partial ...passed 00:34:14.818 Test: blockdev reset ...[2024-10-21 12:18:51.326405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.818 [2024-10-21 12:18:51.326501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa854d0 (9): Bad file descriptor 00:34:14.818 [2024-10-21 12:18:51.331291] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
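The reset exercise above is the one test that deliberately severs the transport: the controller disconnects while the qpair still holds state, the flush fails with the 'Bad file descriptor' *ERROR* line (expected here, not a failure), and bdev_nvme reconnects and reports the reset successful. bdevio drives this in-process through the bdev reset path; as a sketch only, a long-running, RPC-reachable SPDK app holding the same controller could be put through the equivalent disconnect/reconnect cycle by hand:

    # hypothetical manual trigger against an RPC-serving app (not what bdevio itself does)
    rpc.py bdev_nvme_reset_controller Nvme1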
00:34:14.818 passed 00:34:14.818 Test: blockdev write read 8 blocks ...passed 00:34:14.818 Test: blockdev write read size > 128k ...passed 00:34:14.818 Test: blockdev write read invalid size ...passed 00:34:14.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:14.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:14.818 Test: blockdev write read max offset ...passed 00:34:15.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:15.078 Test: blockdev writev readv 8 blocks ...passed 00:34:15.078 Test: blockdev writev readv 30 x 1block ...passed 00:34:15.078 Test: blockdev writev readv block ...passed 00:34:15.078 Test: blockdev writev readv size > 128k ...passed 00:34:15.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:15.078 Test: blockdev comparev and writev ...[2024-10-21 12:18:51.514041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.078 [2024-10-21 12:18:51.514094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.078 [2024-10-21 12:18:51.514112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.078 [2024-10-21 12:18:51.514121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:15.078 [2024-10-21 12:18:51.514760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.078 [2024-10-21 12:18:51.514777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:15.078 [2024-10-21 12:18:51.514791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.078 [2024-10-21 12:18:51.514800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:15.078 [2024-10-21 12:18:51.515442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.079 [2024-10-21 12:18:51.515457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.515471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.079 [2024-10-21 12:18:51.515479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.516100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.079 [2024-10-21 12:18:51.516114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:15.079 [2024-10-21 12:18:51.516136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:15.079 passed 00:34:15.079 Test: blockdev nvme passthru rw ...passed 00:34:15.079 Test: blockdev nvme passthru vendor specific ...[2024-10-21 12:18:51.600267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:15.079 [2024-10-21 12:18:51.600291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.600681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:15.079 [2024-10-21 12:18:51.600697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.601093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:15.079 [2024-10-21 12:18:51.601107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:15.079 [2024-10-21 12:18:51.601503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:15.079 [2024-10-21 12:18:51.601519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:15.079 passed 00:34:15.079 Test: blockdev nvme admin passthru ...passed 00:34:15.079 Test: blockdev copy ...passed 00:34:15.079 00:34:15.079 Run Summary: Type Total Ran Passed Failed Inactive 00:34:15.079 suites 1 1 n/a 0 0 00:34:15.079 tests 23 23 23 0 0 00:34:15.079 asserts 152 152 152 0 n/a 00:34:15.079 00:34:15.079 Elapsed time = 0.995 seconds 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:15.339 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.340 rmmod nvme_tcp 00:34:15.340 rmmod nvme_fabrics 00:34:15.340 rmmod nvme_keyring 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
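With the suite green (23/23 tests, 152 asserts), teardown starts: the EXIT trap runs nvmftestfini, the kernel initiator modules come back out (the rmmod lines), and the target is killed by pid. A few lines below, iptr undoes the firewall change, which only works because of how the rule was installed in the first place: every rule the harness inserts is stamped with an 'SPDK_NVMF:' comment, so cleanup can regenerate the ruleset without any tagged rule instead of bookkeeping rule numbers. Both halves of the pattern, copied from this run:

    # setup (nvmf/common.sh@788 above): open the NVMe/TCP port, tagging the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # teardown: dump the ruleset, drop every tagged rule, load the rest back
    iptables-save | grep -v SPDK_NVMF | iptables-restore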
00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1251457 ']' 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1251457 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1251457 ']' 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1251457 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:15.340 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1251457 00:34:15.600 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:15.600 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:15.600 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1251457' 00:34:15.600 killing process with pid 1251457 00:34:15.600 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1251457 00:34:15.600 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1251457 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.600 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.147 00:34:18.147 real 0m12.208s 00:34:18.147 user 
0m9.029s 00:34:18.147 sys 0m6.547s 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.147 ************************************ 00:34:18.147 END TEST nvmf_bdevio 00:34:18.147 ************************************ 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:18.147 00:34:18.147 real 4m59.631s 00:34:18.147 user 10m8.189s 00:34:18.147 sys 2m4.255s 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:18.147 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:18.147 ************************************ 00:34:18.147 END TEST nvmf_target_core_interrupt_mode 00:34:18.147 ************************************ 00:34:18.147 12:18:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:18.147 12:18:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:18.147 12:18:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:18.147 12:18:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.147 ************************************ 00:34:18.147 START TEST nvmf_interrupt 00:34:18.147 ************************************ 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:18.147 * Looking for test storage... 
00:34:18.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.147 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:18.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.148 --rc genhtml_branch_coverage=1 00:34:18.148 --rc genhtml_function_coverage=1 00:34:18.148 --rc genhtml_legend=1 00:34:18.148 --rc geninfo_all_blocks=1 00:34:18.148 --rc geninfo_unexecuted_blocks=1 00:34:18.148 00:34:18.148 ' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:18.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.148 --rc genhtml_branch_coverage=1 00:34:18.148 --rc genhtml_function_coverage=1 00:34:18.148 --rc genhtml_legend=1 00:34:18.148 --rc geninfo_all_blocks=1 00:34:18.148 --rc geninfo_unexecuted_blocks=1 00:34:18.148 00:34:18.148 ' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:18.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.148 --rc genhtml_branch_coverage=1 00:34:18.148 --rc genhtml_function_coverage=1 00:34:18.148 --rc genhtml_legend=1 00:34:18.148 --rc geninfo_all_blocks=1 00:34:18.148 --rc geninfo_unexecuted_blocks=1 00:34:18.148 00:34:18.148 ' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:18.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.148 --rc genhtml_branch_coverage=1 00:34:18.148 --rc genhtml_function_coverage=1 00:34:18.148 --rc genhtml_legend=1 00:34:18.148 --rc geninfo_all_blocks=1 00:34:18.148 --rc geninfo_unexecuted_blocks=1 00:34:18.148 00:34:18.148 ' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.148 12:18:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:26.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.303 12:19:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.303 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:26.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:26.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:26.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:26.304 12:19:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:34:26.304 00:34:26.304 --- 10.0.0.2 ping statistics --- 00:34:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.304 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:26.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:34:26.304 00:34:26.304 --- 10.0.0.1 ping statistics --- 00:34:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.304 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1255937 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1255937 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1255937 ']' 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:26.304 12:19:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.304 [2024-10-21 12:19:01.908469] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:26.304 [2024-10-21 12:19:01.909437] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:34:26.304 [2024-10-21 12:19:01.909476] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.304 [2024-10-21 12:19:01.991791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:26.304 [2024-10-21 12:19:02.027863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
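This second bring-up repeats the physical-loopback topology used for the bdevio run: the two E810 ports (cvl_0_0, cvl_0_1) are split across network namespaces so target and initiator traffic crosses the real cable even on a single machine, and one ping in each direction gates the rest of the test. Collected from the trace in execution order (interface and namespace names are this run's):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every nvmf_tgt invocation is then prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_APP/NVMF_TARGET_NS_CMD splice visible above), which is how the target ends up listening on 10.0.0.2 inside the namespace.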
00:34:26.304 [2024-10-21 12:19:02.027895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.304 [2024-10-21 12:19:02.027904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.304 [2024-10-21 12:19:02.027910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.304 [2024-10-21 12:19:02.027916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.304 [2024-10-21 12:19:02.029061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.304 [2024-10-21 12:19:02.029064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.304 [2024-10-21 12:19:02.083912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:26.304 [2024-10-21 12:19:02.084409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:26.304 [2024-10-21 12:19:02.084752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:26.304 5000+0 records in 00:34:26.304 5000+0 records out 00:34:26.304 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017996 s, 569 MB/s 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.304 AIO0 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:26.304 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 [2024-10-21 12:19:02.793944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.305 12:19:02 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 [2024-10-21 12:19:02.834274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1255937 0 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 0 idle 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:26.305 12:19:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255937 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.26 reactor_0' 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255937 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.26 reactor_0 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1255937 1 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 1 idle 00:34:26.566 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:26.567 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255956 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.00 reactor_1' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255956 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.00 reactor_1 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1256299 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1255937 0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1255937 0 busy 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255937 root 20 0 128.2g 41472 31104 S 13.3 0.0 0:00.28 reactor_0' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255937 root 20 0 128.2g 41472 31104 S 13.3 0.0 0:00.28 reactor_0 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:26.828 12:19:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:28.214 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:28.214 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255937 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:02.49 reactor_0' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255937 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:02.49 reactor_0 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1255937 1 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1255937 1 busy 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255956 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:01.30 reactor_1' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255956 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:01.30 reactor_1 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.215 12:19:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1256299 00:34:38.213 Initializing NVMe Controllers 00:34:38.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:38.213 Controller IO queue size 256, less than required. 00:34:38.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:38.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:38.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:38.213 Initialization complete. Launching workers. 
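The latency summary printed next is the output of the spdk_nvme_perf run launched above (perf_pid=1256299). For reference, a commented restatement of that exact invocation — the flag values are taken straight from this run, and the meanings are the standard spdk_nvme_perf options:

    # Values below are exactly those used in this run:
    #   -q 256     queue depth of 256 per namespace
    #   -o 4096    4 KiB I/O size
    #   -w randrw  random mixed workload; -M 30 sets the read mix to 30%
    #   -t 10      10-second run
    #   -c 0xC     core mask 0b1100, i.e. lcores 2 and 3 (matching the
    #              per-core rows in the table below)
    #   -r ...     transport ID of the TCP subsystem to connect to
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'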
00:34:38.213 ======================================================== 00:34:38.213 Latency(us) 00:34:38.213 Device Information : IOPS MiB/s Average min max 00:34:38.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18934.90 73.96 13524.45 3964.29 30329.07 00:34:38.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20488.60 80.03 12496.46 7802.69 28372.67 00:34:38.213 ======================================================== 00:34:38.213 Total : 39423.50 154.00 12990.20 3964.29 30329.07 00:34:38.213 00:34:38.213 [2024-10-21 12:19:13.506089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130aaa0 is same with the state(6) to be set 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1255937 0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 0 idle 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255937 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.26 reactor_0' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255937 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.26 reactor_0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1255937 1 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 1 idle 
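Both reactors were confirmed busy while perf ran; with the workload finished, the harness now re-checks that each reactor has dropped back to idle (reactor 0 above, reactor 1 in the trace that resumes below). Every one of these probes reduces to the same pattern: take one batch sample of top for the target PID's threads, pull the %CPU column for reactor_<idx>, truncate it to an integer, and compare it with a threshold. A condensed sketch of that logic — not the verbatim interrupt/common.sh, which additionally retries up to 10 samples with a sleep in between:

    # Sample the %CPU of thread reactor_<idx> inside process <pid>.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 \
            | grep "reactor_$idx" \
            | sed -e 's/^\s*//g' \
            | awk '{print $9}'      # column 9 of top -H output is %CPU
    }

    reactor_is_idle_sketch() {
        local rate
        rate=$(reactor_cpu_rate "$1" "$2")
        rate=${rate%.*}             # "99.9" -> "99", as the harness does
        (( ${rate:-0} <= 30 ))      # idle_threshold=30 in this run
    }

The busy check is the mirror image: with BUSY_THRESHOLD=30 exported by interrupt.sh, a reactor counts as busy once its sampled rate reaches the threshold, which is why the 13% first sample for reactor 0 above triggered a sleep-and-retry before the 99.9% sample passed.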
00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255956 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.01 reactor_1' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255956 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.01 reactor_1 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.213 12:19:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:38.213 12:19:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:38.213 12:19:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:38.213 12:19:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:38.213 12:19:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:38.213 12:19:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter 
)) 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1255937 0 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 0 idle 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:40.126 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:40.127 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255937 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.61 reactor_0' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255937 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.61 reactor_0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1255937 1 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1255937 1 idle 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1255937 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1255937 -w 256 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1255956 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.13 reactor_1' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1255956 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.13 reactor_1 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:40.388 12:19:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:40.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.649 rmmod nvme_tcp 00:34:40.649 rmmod nvme_fabrics 00:34:40.649 rmmod nvme_keyring 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:40.649 12:19:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 1255937 ']' 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1255937 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1255937 ']' 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1255937 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255937 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255937' 00:34:40.649 killing process with pid 1255937 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1255937 00:34:40.649 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1255937 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:40.909 12:19:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.456 12:19:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.456 00:34:43.456 real 0m25.165s 00:34:43.456 user 0m40.451s 00:34:43.456 sys 0m9.450s 00:34:43.456 12:19:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:43.456 12:19:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:43.456 ************************************ 00:34:43.456 END TEST nvmf_interrupt 00:34:43.456 ************************************ 00:34:43.456 00:34:43.456 real 29m48.896s 00:34:43.456 user 61m16.365s 00:34:43.456 sys 10m12.184s 00:34:43.456 12:19:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:43.456 12:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.456 ************************************ 00:34:43.456 END TEST nvmf_tcp 00:34:43.456 ************************************ 00:34:43.456 12:19:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:43.456 12:19:19 -- spdk/autotest.sh@282 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:43.456 12:19:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:43.456 12:19:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:43.456 12:19:19 -- common/autotest_common.sh@10 -- # set +x 00:34:43.456 ************************************ 00:34:43.456 START TEST spdkcli_nvmf_tcp 00:34:43.456 ************************************ 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:43.456 * Looking for test storage... 00:34:43.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.456 --rc genhtml_branch_coverage=1 00:34:43.456 --rc genhtml_function_coverage=1 00:34:43.456 --rc genhtml_legend=1 00:34:43.456 --rc geninfo_all_blocks=1 00:34:43.456 --rc geninfo_unexecuted_blocks=1 00:34:43.456 00:34:43.456 ' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.456 --rc genhtml_branch_coverage=1 00:34:43.456 --rc genhtml_function_coverage=1 00:34:43.456 --rc genhtml_legend=1 00:34:43.456 --rc geninfo_all_blocks=1 00:34:43.456 --rc geninfo_unexecuted_blocks=1 00:34:43.456 00:34:43.456 ' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.456 --rc genhtml_branch_coverage=1 00:34:43.456 --rc genhtml_function_coverage=1 00:34:43.456 --rc genhtml_legend=1 00:34:43.456 --rc geninfo_all_blocks=1 00:34:43.456 --rc geninfo_unexecuted_blocks=1 00:34:43.456 00:34:43.456 ' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.456 --rc genhtml_branch_coverage=1 00:34:43.456 --rc genhtml_function_coverage=1 00:34:43.456 --rc genhtml_legend=1 00:34:43.456 --rc geninfo_all_blocks=1 00:34:43.456 --rc geninfo_unexecuted_blocks=1 00:34:43.456 00:34:43.456 ' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:43.456 
12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:43.456 12:19:19 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.456 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1259492 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1259492 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1259492 ']' 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:43.457 12:19:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.457 [2024-10-21 12:19:19.899620] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
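The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which is conceptually a bounded poll on the target PID and its RPC socket. A minimal stand-in under that assumption — the real helper in test/common/autotest_common.sh is more thorough (it also confirms the RPC server actually answers, e.g. via scripts/rpc.py rpc_get_methods):

    # Hypothetical condensed waitforlisten; pid/socket as used in this run.
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_sock ]] && return 0           # RPC listener is up
            sleep 0.1
        done
        return 1                                     # gave up waiting
    }

    # e.g.: waitforlisten_sketch 1259492 /var/tmp/spdk.sock

Once the socket is up, spdkcli_job.py drives the (command, expected-match, should-succeed) triplets shown below against the running nvmf_tgt.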
00:34:43.457 [2024-10-21 12:19:19.899692] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259492 ] 00:34:43.457 [2024-10-21 12:19:19.981672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:43.457 [2024-10-21 12:19:20.043524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.457 [2024-10-21 12:19:20.043569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.397 12:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:44.397 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:44.397 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:44.397 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:44.397 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:44.397 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:44.397 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:44.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:44.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:44.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:44.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:44.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:44.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:44.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:44.398 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:44.398 ' 00:34:47.122 [2024-10-21 12:19:23.463425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.505 [2024-10-21 12:19:24.827677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:51.048 [2024-10-21 12:19:27.366854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:53.590 [2024-10-21 12:19:29.593237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:54.974 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:54.974 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:54.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:54.974 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:54.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:54.974 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:54.974 12:19:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:55.235 12:19:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.495 
12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.495 12:19:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:55.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:55.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:55.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:55.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:55.495 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:55.495 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:55.495 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:55.495 ' 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:02.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:02.081 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:02.081 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:02.081 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.081 
12:19:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1259492 ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1259492' 00:35:02.081 killing process with pid 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1259492 ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1259492 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1259492 ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1259492 00:35:02.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1259492) - No such process 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1259492 is not found' 00:35:02.081 Process with pid 1259492 is not found 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:02.081 00:35:02.081 real 0m18.186s 00:35:02.081 user 0m40.403s 00:35:02.081 sys 0m0.885s 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:02.081 12:19:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.081 ************************************ 00:35:02.081 END TEST spdkcli_nvmf_tcp 00:35:02.081 ************************************ 00:35:02.081 12:19:37 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.081 12:19:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:02.081 12:19:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.081 12:19:37 -- common/autotest_common.sh@10 -- # set +x 00:35:02.081 ************************************ 00:35:02.081 START TEST nvmf_identify_passthru 00:35:02.081 ************************************ 00:35:02.081 12:19:37 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.081 * Looking for test 
storage... 00:35:02.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:02.081 12:19:37 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:02.081 12:19:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:02.081 12:19:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.081 12:19:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.081 --rc genhtml_branch_coverage=1 00:35:02.081 --rc genhtml_function_coverage=1 00:35:02.081 --rc genhtml_legend=1 00:35:02.081 --rc geninfo_all_blocks=1 00:35:02.081 --rc geninfo_unexecuted_blocks=1 00:35:02.081 00:35:02.081 ' 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.081 --rc genhtml_branch_coverage=1 00:35:02.081 --rc genhtml_function_coverage=1 00:35:02.081 --rc genhtml_legend=1 00:35:02.081 --rc geninfo_all_blocks=1 00:35:02.081 --rc geninfo_unexecuted_blocks=1 00:35:02.081 00:35:02.081 ' 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.081 --rc genhtml_branch_coverage=1 00:35:02.081 --rc genhtml_function_coverage=1 00:35:02.081 --rc genhtml_legend=1 00:35:02.081 --rc geninfo_all_blocks=1 00:35:02.081 --rc geninfo_unexecuted_blocks=1 00:35:02.081 00:35:02.081 ' 00:35:02.081 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.081 --rc genhtml_branch_coverage=1 00:35:02.081 --rc genhtml_function_coverage=1 00:35:02.081 --rc genhtml_legend=1 00:35:02.081 --rc geninfo_all_blocks=1 00:35:02.081 --rc geninfo_unexecuted_blocks=1 00:35:02.081 00:35:02.081 ' 00:35:02.081 12:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.081 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:02.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.082 12:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.082 12:19:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.082 12:19:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.082 12:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.082 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.082 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:02.082 12:19:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.082 12:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:08.668 12:19:45 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:08.668 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:08.668 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.668 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:08.669 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:08.669 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:08.669 12:19:45 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.669 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:08.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:35:08.930 00:35:08.930 --- 10.0.0.2 ping statistics --- 00:35:08.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.930 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
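Note: the nvmf_tcp_init trace above reduces to the following namespace plumbing (interface and namespace names as seen in this run; a condensed sketch, not the full common.sh logic). The target-side port is moved into its own network namespace so initiator and target can talk over real NICs on a single host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two one-packet pings whose output follows are the sanity check that both directions work before any NVMe/TCP traffic is attempted.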
00:35:08.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:35:08.930 00:35:08.930 --- 10.0.0.1 ping statistics --- 00:35:08.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.930 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:08.930 12:19:45 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:08.930 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:08.930 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.931 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:08.931 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:09.191 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:09.191 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:09.191 12:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:09.191 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:09.191 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:09.191 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:09.191 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:09.191 12:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:09.452 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:09.452 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:09.452 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:09.452 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1266907 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:10.025 12:19:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1266907 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1266907 ']' 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.025 12:19:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.286 [2024-10-21 12:19:46.623197] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:35:10.286 [2024-10-21 12:19:46.623266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.286 [2024-10-21 12:19:46.710379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:10.286 [2024-10-21 12:19:46.764459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.286 [2024-10-21 12:19:46.764506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
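Note: the target launch traced here amounts to starting nvmf_tgt inside the target namespace and waiting for its RPC socket before issuing configuration RPCs. A rough sketch, assuming rpc_cmd resolves to scripts/rpc.py and that waitforlisten is approximated by polling /var/tmp/spdk.sock (the real helper in autotest_common.sh does more bookkeeping):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    trap 'kill $nvmfpid' SIGINT SIGTERM EXIT
    # Wait until the app opens its RPC listener before configuring it.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr  # must precede framework init
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

--wait-for-rpc is what makes the passthru-identify setting possible: the config RPC has to land before the subsystem framework initializes.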
00:35:10.286 [2024-10-21 12:19:46.764515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.286 [2024-10-21 12:19:46.764522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.286 [2024-10-21 12:19:46.764528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:10.286 [2024-10-21 12:19:46.766603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.286 [2024-10-21 12:19:46.766763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.286 [2024-10-21 12:19:46.766928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.286 [2024-10-21 12:19:46.766929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:10.858 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.858 INFO: Log level set to 20 00:35:10.858 INFO: Requests: 00:35:10.858 { 00:35:10.858 "jsonrpc": "2.0", 00:35:10.858 "method": "nvmf_set_config", 00:35:10.858 "id": 1, 00:35:10.858 "params": { 00:35:10.858 "admin_cmd_passthru": { 00:35:10.858 "identify_ctrlr": true 00:35:10.858 } 00:35:10.858 } 00:35:10.858 } 00:35:10.858 00:35:10.858 INFO: response: 00:35:10.858 { 00:35:10.858 "jsonrpc": "2.0", 00:35:10.858 "id": 1, 00:35:10.858 "result": true 00:35:10.858 } 00:35:10.858 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.858 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.858 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.858 INFO: Setting log level to 20 00:35:10.858 INFO: Setting log level to 20 00:35:10.858 INFO: Log level set to 20 00:35:10.858 INFO: Log level set to 20 00:35:10.858 INFO: Requests: 00:35:10.858 { 00:35:10.858 "jsonrpc": "2.0", 00:35:10.858 "method": "framework_start_init", 00:35:10.858 "id": 1 00:35:10.858 } 00:35:10.858 00:35:10.858 INFO: Requests: 00:35:10.858 { 00:35:10.858 "jsonrpc": "2.0", 00:35:10.858 "method": "framework_start_init", 00:35:10.858 "id": 1 00:35:10.858 } 00:35:10.858 00:35:11.119 [2024-10-21 12:19:47.503365] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:11.119 INFO: response: 00:35:11.119 { 00:35:11.119 "jsonrpc": "2.0", 00:35:11.119 "id": 1, 00:35:11.119 "result": true 00:35:11.119 } 00:35:11.119 00:35:11.119 INFO: response: 00:35:11.119 { 00:35:11.119 "jsonrpc": "2.0", 00:35:11.119 "id": 1, 00:35:11.119 "result": true 00:35:11.119 } 00:35:11.119 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.119 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.119 12:19:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.119 INFO: Setting log level to 40 00:35:11.119 INFO: Setting log level to 40 00:35:11.119 INFO: Setting log level to 40 00:35:11.119 [2024-10-21 12:19:47.512689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.119 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.119 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.119 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.380 Nvme0n1 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.380 [2024-10-21 12:19:47.896507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.380 [ 00:35:11.380 { 00:35:11.380 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:11.380 "subtype": "Discovery", 00:35:11.380 "listen_addresses": [], 00:35:11.380 "allow_any_host": true, 00:35:11.380 "hosts": [] 00:35:11.380 }, 00:35:11.380 { 00:35:11.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.380 "subtype": "NVMe", 00:35:11.380 "listen_addresses": [ 00:35:11.380 { 00:35:11.380 "trtype": "TCP", 00:35:11.380 "adrfam": "IPv4", 00:35:11.380 "traddr": "10.0.0.2", 00:35:11.380 "trsvcid": "4420" 00:35:11.380 } 00:35:11.380 ], 00:35:11.380 "allow_any_host": true, 00:35:11.380 "hosts": [], 00:35:11.380 "serial_number": 
"SPDK00000000000001", 00:35:11.380 "model_number": "SPDK bdev Controller", 00:35:11.380 "max_namespaces": 1, 00:35:11.380 "min_cntlid": 1, 00:35:11.380 "max_cntlid": 65519, 00:35:11.380 "namespaces": [ 00:35:11.380 { 00:35:11.380 "nsid": 1, 00:35:11.380 "bdev_name": "Nvme0n1", 00:35:11.380 "name": "Nvme0n1", 00:35:11.380 "nguid": "36344730526054870025384500000044", 00:35:11.380 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:11.380 } 00:35:11.380 ] 00:35:11.380 } 00:35:11.380 ] 00:35:11.380 12:19:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:11.380 12:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:11.640 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:11.640 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:11.640 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:11.640 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:11.901 12:19:48 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.901 rmmod nvme_tcp 00:35:11.901 rmmod nvme_fabrics 00:35:11.901 rmmod nvme_keyring 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
1266907 ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1266907 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1266907 ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1266907 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.901 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1266907 00:35:12.162 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:12.162 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:12.162 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1266907' 00:35:12.162 killing process with pid 1266907 00:35:12.162 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1266907 00:35:12.162 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1266907 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.421 12:19:48 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.421 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.421 12:19:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.331 12:19:50 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.331 00:35:14.331 real 0m13.031s 00:35:14.331 user 0m10.635s 00:35:14.331 sys 0m6.330s 00:35:14.331 12:19:50 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:14.331 12:19:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.331 ************************************ 00:35:14.331 END TEST nvmf_identify_passthru 00:35:14.331 ************************************ 00:35:14.592 12:19:50 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:14.592 12:19:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:14.592 12:19:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:14.592 12:19:50 -- common/autotest_common.sh@10 -- # set +x 00:35:14.592 ************************************ 00:35:14.592 START TEST nvmf_dif 00:35:14.592 ************************************ 00:35:14.592 12:19:50 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:14.592 * Looking for test storage... 
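Note: the pass/fail logic of the identify_passthru test that just ended is the serial/model comparison visible above: identify the controller once over local PCIe and once over NVMe/TCP through the passthru subsystem, and require both paths to report the same bare-metal values. A sketch of the check with the values from this run (the script stores these in nvme_serial_number/nvmf_serial_number):

    local_sn=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')   # S64GNE0R605487
    fabric_sn=$(spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
               | grep 'Serial Number:' | awk '{print $3}')
    [ "$local_sn" != "$fabric_sn" ] && exit 1  # passthru must expose the real serial

The same comparison is repeated for the model number (SAMSUNG in this run); only if both match does the test tear down the subsystem and report success.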
00:35:14.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:14.592 12:19:51 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:14.592 12:19:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:14.592 12:19:51 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:14.592 12:19:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.592 12:19:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.593 12:19:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:14.593 12:19:51 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.593 12:19:51 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:14.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.593 --rc genhtml_branch_coverage=1 00:35:14.593 --rc genhtml_function_coverage=1 00:35:14.593 --rc genhtml_legend=1 00:35:14.593 --rc geninfo_all_blocks=1 00:35:14.593 --rc geninfo_unexecuted_blocks=1 00:35:14.593 00:35:14.593 ' 00:35:14.593 12:19:51 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:14.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.593 --rc genhtml_branch_coverage=1 00:35:14.593 --rc genhtml_function_coverage=1 00:35:14.593 --rc genhtml_legend=1 00:35:14.593 --rc geninfo_all_blocks=1 00:35:14.593 --rc geninfo_unexecuted_blocks=1 00:35:14.593 00:35:14.593 ' 00:35:14.593 12:19:51 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:14.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.593 --rc genhtml_branch_coverage=1 00:35:14.593 --rc genhtml_function_coverage=1 00:35:14.593 --rc genhtml_legend=1 00:35:14.593 --rc geninfo_all_blocks=1 00:35:14.593 --rc geninfo_unexecuted_blocks=1 00:35:14.593 00:35:14.593 ' 00:35:14.593 12:19:51 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:14.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.593 --rc genhtml_branch_coverage=1 00:35:14.593 --rc genhtml_function_coverage=1 00:35:14.593 --rc genhtml_legend=1 00:35:14.593 --rc geninfo_all_blocks=1 00:35:14.593 --rc geninfo_unexecuted_blocks=1 00:35:14.593 00:35:14.593 ' 00:35:14.593 12:19:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.593 12:19:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.854 12:19:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.854 12:19:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.854 12:19:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.854 12:19:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.854 12:19:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.854 12:19:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.854 12:19:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.854 12:19:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:14.854 12:19:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.854 12:19:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:14.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.855 12:19:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:14.855 12:19:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:14.855 12:19:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:14.855 12:19:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:14.855 12:19:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.855 12:19:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.855 12:19:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:14.855 12:19:51 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:14.855 12:19:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:22.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.994 
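Note: the device-array setup above buckets candidate NICs by PCI vendor:device ID before the per-device loop that follows. Roughly, with the IDs as listed in the trace (pci_bus_cache is populated elsewhere in common.sh, so this is only the classification step):

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})  # plus the other CX IDs above
    pci_devs=("${e810[@]}")  # SPDK_TEST_NVMF_NICS=e810, so only E810 ports are kept

That is why the loop below reports exactly the two 0x8086/0x159b ports (0000:4b:00.0 and 0000:4b:00.1) and their cvl_0_0/cvl_0_1 net devices.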
12:19:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.994 12:19:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:22.994 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:22.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:22.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:35:22.995 00:35:22.995 --- 10.0.0.2 ping statistics --- 00:35:22.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.995 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
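
Annotation: all of nvmf_tcp_init above reduces to a short iproute2 sequence. One E810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2; the other (cvl_0_1, presumably cabled back to the first) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed from the trace (the iptables comment tag is elided):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two single-packet pings are the go/no-go check on that topology; the second ping's statistics continue directly below, after which nvmftestinit returns 0.
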
00:35:22.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:35:22.995 00:35:22.995 --- 10.0.0.1 ping statistics --- 00:35:22.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.995 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:22.995 12:19:58 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.543 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:25.543 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:25.543 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:25.804 12:20:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:25.804 12:20:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:25.804 12:20:02 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:25.804 12:20:02 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.805 12:20:02 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1272969 00:35:25.805 12:20:02 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1272969 00:35:25.805 12:20:02 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1272969 ']' 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:25.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.805 12:20:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.805 [2024-10-21 12:20:02.305300] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:35:25.805 [2024-10-21 12:20:02.305369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.805 [2024-10-21 12:20:02.391767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.066 [2024-10-21 12:20:02.428113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.066 [2024-10-21 12:20:02.428146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.066 [2024-10-21 12:20:02.428154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.066 [2024-10-21 12:20:02.428161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.066 [2024-10-21 12:20:02.428167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.066 [2024-10-21 12:20:02.428744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:26.637 12:20:03 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.637 12:20:03 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.637 12:20:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:26.637 12:20:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.637 [2024-10-21 12:20:03.153129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.637 12:20:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.637 ************************************ 00:35:26.637 START TEST fio_dif_1_default 00:35:26.637 ************************************ 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.637 bdev_null0 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.637 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.898 [2024-10-21 12:20:03.241516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.898 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:26.899 { 00:35:26.899 "params": { 00:35:26.899 "name": "Nvme$subsystem", 00:35:26.899 "trtype": "$TEST_TRANSPORT", 00:35:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.899 "adrfam": "ipv4", 00:35:26.899 "trsvcid": "$NVMF_PORT", 00:35:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.899 "hdgst": ${hdgst:-false}, 00:35:26.899 
"ddgst": ${ddgst:-false} 00:35:26.899 }, 00:35:26.899 "method": "bdev_nvme_attach_controller" 00:35:26.899 } 00:35:26.899 EOF 00:35:26.899 )") 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:26.899 "params": { 00:35:26.899 "name": "Nvme0", 00:35:26.899 "trtype": "tcp", 00:35:26.899 "traddr": "10.0.0.2", 00:35:26.899 "adrfam": "ipv4", 00:35:26.899 "trsvcid": "4420", 00:35:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.899 "hdgst": false, 00:35:26.899 "ddgst": false 00:35:26.899 }, 00:35:26.899 "method": "bdev_nvme_attach_controller" 00:35:26.899 }' 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:26.899 12:20:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.159 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:27.159 fio-3.35 00:35:27.159 Starting 1 thread 00:35:39.397 00:35:39.397 filename0: (groupid=0, jobs=1): err= 0: pid=1273531: Mon Oct 21 12:20:14 2024 00:35:39.397 read: IOPS=192, BW=770KiB/s (788kB/s)(7712KiB/10018msec) 00:35:39.397 slat (nsec): min=5645, max=34742, avg=6423.13, stdev=1430.71 00:35:39.397 clat (usec): min=680, max=42478, avg=20765.86, stdev=20163.90 00:35:39.397 lat (usec): min=685, max=42513, avg=20772.28, stdev=20163.87 00:35:39.397 clat percentiles (usec): 00:35:39.397 | 1.00th=[ 734], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 840], 00:35:39.397 | 30.00th=[ 857], 40.00th=[ 906], 50.00th=[ 1020], 60.00th=[41157], 00:35:39.397 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:39.397 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:35:39.397 | 99.99th=[42730] 00:35:39.397 bw ( KiB/s): min= 704, max= 864, per=99.89%, avg=769.60, stdev=33.60, samples=20 00:35:39.397 iops : min= 176, max= 216, avg=192.40, stdev= 8.40, samples=20 00:35:39.397 lat (usec) : 750=2.18%, 1000=46.99% 00:35:39.397 lat (msec) : 2=1.45%, 50=49.38% 00:35:39.397 cpu : usr=93.65%, sys=6.15%, ctx=15, majf=0, minf=223 00:35:39.397 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.397 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.397 latency : target=0, window=0, percentile=100.00%, depth=4 
00:35:39.397 00:35:39.397 Run status group 0 (all jobs): 00:35:39.397 READ: bw=770KiB/s (788kB/s), 770KiB/s-770KiB/s (788kB/s-788kB/s), io=7712KiB (7897kB), run=10018-10018msec 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 00:35:39.397 real 0m11.285s 00:35:39.397 user 0m24.466s 00:35:39.397 sys 0m0.975s 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 ************************************ 00:35:39.397 END TEST fio_dif_1_default 00:35:39.397 ************************************ 00:35:39.397 12:20:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:39.397 12:20:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:39.397 12:20:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 ************************************ 00:35:39.397 START TEST fio_dif_1_multi_subsystems 00:35:39.397 ************************************ 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 bdev_null0 00:35:39.397 12:20:14 
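
Annotation: the run status above closes fio_dif_1_default, and the numbers hang together, which is the quick health check to apply to any of these report blocks: io over runtime reproduces the bandwidth line, bandwidth over block size reproduces IOPS, and the ~20.8 ms average clat is just the mix of the sub-millisecond completions (~50%) with the ~41 ms band in the percentile table. As arithmetic (the weights are read off the tables above, so the last line is approximate):

awk 'BEGIN {
  bw   = 7712 / (10018 / 1000)          # io=7712KiB / run=10018msec -> ~770 KiB/s
  iops = bw / 4                         # 4 KiB blocks               -> ~192
  clat = 0.494 * 41157 + 0.506 * 900    # usec; ~half at ~41.2ms, ~half sub-ms
  printf "bw=%.0f KiB/s iops=%.0f avg_clat=%.1f ms\n", bw, iops, clat / 1000
}'

fio_dif_1_multi_subsystems, which starts here, then repeats the bring-up once per subsystem: each gets its own DIF-type-1 null bdev, its own NQN and host NQN, and a listener on the same 10.0.0.2:4420, as the following trace shows. Roughly, in plain rpc.py terms (the harness goes through its rpc_cmd wrapper and the target's netns instead):

for sub in 0 1; do
  rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420
done

Two controllers mean two fio files, which is why this run reports two threads and two result blocks further down.
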
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 [2024-10-21 12:20:14.609633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 bdev_null1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.397 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:39.398 { 00:35:39.398 "params": { 00:35:39.398 "name": "Nvme$subsystem", 00:35:39.398 "trtype": "$TEST_TRANSPORT", 00:35:39.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.398 "adrfam": "ipv4", 00:35:39.398 "trsvcid": "$NVMF_PORT", 00:35:39.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.398 "hdgst": ${hdgst:-false}, 00:35:39.398 "ddgst": ${ddgst:-false} 00:35:39.398 }, 00:35:39.398 "method": "bdev_nvme_attach_controller" 00:35:39.398 } 00:35:39.398 EOF 00:35:39.398 )") 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.398 
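
Annotation: the heredoc template visible above is the core of gen_nvmf_target_json: one bdev_nvme_attach_controller parameter block per subsystem id, comma-joined via IFS=, and pushed through jq. A self-contained sketch; the variable defaults and the bare-array wrapper on the last line are assumptions, since the trace shows only the per-subsystem blocks and the jq call:

gen_nvmf_target_json() {
  : "${TEST_TRANSPORT:=tcp}" "${NVMF_FIRST_TARGET_IP:=10.0.0.2}" "${NVMF_PORT:=4420}"
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  jq . <<<"[${config[*]}]"   # assumed wrapper; the harness embeds this in fio's JSON config
}

Called as gen_nvmf_target_json 0 1 it resolves to exactly the Nvme0/Nvme1 pair printed a few lines below, which fio's spdk_bdev engine reads over /dev/fd/62.
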
12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:39.398 { 00:35:39.398 "params": { 00:35:39.398 "name": "Nvme$subsystem", 00:35:39.398 "trtype": "$TEST_TRANSPORT", 00:35:39.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.398 "adrfam": "ipv4", 00:35:39.398 "trsvcid": "$NVMF_PORT", 00:35:39.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.398 "hdgst": ${hdgst:-false}, 00:35:39.398 "ddgst": ${ddgst:-false} 00:35:39.398 }, 00:35:39.398 "method": "bdev_nvme_attach_controller" 00:35:39.398 } 00:35:39.398 EOF 00:35:39.398 )") 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:39.398 "params": { 00:35:39.398 "name": "Nvme0", 00:35:39.398 "trtype": "tcp", 00:35:39.398 "traddr": "10.0.0.2", 00:35:39.398 "adrfam": "ipv4", 00:35:39.398 "trsvcid": "4420", 00:35:39.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.398 "hdgst": false, 00:35:39.398 "ddgst": false 00:35:39.398 }, 00:35:39.398 "method": "bdev_nvme_attach_controller" 00:35:39.398 },{ 00:35:39.398 "params": { 00:35:39.398 "name": "Nvme1", 00:35:39.398 "trtype": "tcp", 00:35:39.398 "traddr": "10.0.0.2", 00:35:39.398 "adrfam": "ipv4", 00:35:39.398 "trsvcid": "4420", 00:35:39.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.398 "hdgst": false, 00:35:39.398 "ddgst": false 00:35:39.398 }, 00:35:39.398 "method": "bdev_nvme_attach_controller" 00:35:39.398 }' 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.398 12:20:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.398 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:39.398 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:39.398 fio-3.35 00:35:39.398 Starting 2 threads 00:35:49.401 00:35:49.401 filename0: (groupid=0, jobs=1): err= 0: pid=1275827: Mon Oct 21 12:20:25 2024 00:35:49.402 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10017msec) 00:35:49.402 slat (nsec): min=5645, max=31519, avg=6672.33, stdev=1609.45 00:35:49.402 clat (usec): min=858, max=42447, avg=40534.45, stdev=4398.47 00:35:49.402 lat (usec): min=863, max=42478, avg=40541.12, stdev=4398.58 00:35:49.402 clat percentiles (usec): 00:35:49.402 | 1.00th=[ 979], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:49.402 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:49.402 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:49.402 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:49.402 | 99.99th=[42206] 00:35:49.402 bw ( KiB/s): min= 384, max= 416, per=34.13%, avg=393.60, stdev=15.05, samples=20 00:35:49.402 iops : min= 96, max= 104, avg=98.40, stdev= 3.76, samples=20 00:35:49.402 lat (usec) : 1000=1.21% 00:35:49.402 lat (msec) : 50=98.79% 00:35:49.402 cpu : usr=95.84%, sys=3.95%, ctx=14, majf=0, minf=152 00:35:49.402 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.402 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.402 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:49.402 filename1: (groupid=0, jobs=1): err= 0: pid=1275828: Mon Oct 21 12:20:25 2024 00:35:49.402 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:35:49.402 slat (nsec): min=5720, max=33414, avg=6628.09, stdev=1469.27 00:35:49.402 clat (usec): min=651, max=42323, avg=21084.14, stdev=20166.19 00:35:49.402 lat (usec): min=657, max=42356, avg=21090.77, stdev=20166.17 00:35:49.402 clat percentiles (usec): 00:35:49.402 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 807], 20.00th=[ 832], 00:35:49.402 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:35:49.402 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:49.402 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:49.402 | 99.99th=[42206] 00:35:49.402 bw ( KiB/s): min= 672, max= 768, per=65.91%, avg=759.58, stdev=25.78, samples=19 00:35:49.402 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:49.402 lat (usec) : 750=2.80%, 1000=46.15% 00:35:49.402 lat (msec) : 2=0.84%, 50=50.21% 00:35:49.402 cpu : usr=95.16%, sys=4.63%, ctx=14, majf=0, minf=122 00:35:49.402 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:49.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.402 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.402 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:49.402 00:35:49.402 Run status group 0 (all jobs): 00:35:49.402 READ: bw=1152KiB/s (1179kB/s), 395KiB/s-758KiB/s (404kB/s-776kB/s), io=11.3MiB (11.8MB), run=10003-10017msec 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 00:35:49.663 real 0m11.541s 00:35:49.663 user 0m34.684s 00:35:49.663 sys 0m1.227s 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 ************************************ 00:35:49.663 END TEST fio_dif_1_multi_subsystems 00:35:49.663 ************************************ 00:35:49.663 12:20:26 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:35:49.663 12:20:26 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:49.663 12:20:26 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 ************************************ 00:35:49.663 START TEST fio_dif_rand_params 00:35:49.663 ************************************ 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 bdev_null0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.663 [2024-10-21 12:20:26.234801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
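
Annotation: fio_dif_rand_params, which starts above, moves from fixed parameters to randomized ones; the first case sets NULL_DIF=3 with bs=128k, numjobs=3, iodepth=3, runtime=5. The bdev it just created is a protection-type-3 null device, 16 bytes of metadata per 512-byte block; unlike the type-1 bdevs in the earlier cases, type 3 does not tie the reference tag to the LBA. In plain rpc.py terms (again, the harness uses its rpc_cmd wrapper):

rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# 64 (MB) total size, 512-byte data blocks, 16-byte metadata area, T10 DIF type 3

Because the transport was created with --dif-insert-or-strip, the target generates and strips the protection information itself, so the host-side fio jobs stay DIF-unaware; presumably that is why this test sweeps bs/numjobs/iodepth, to exercise the insert/strip paths under different I/O segmentation.
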
00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:49.663 { 00:35:49.663 "params": { 00:35:49.663 "name": "Nvme$subsystem", 00:35:49.663 "trtype": "$TEST_TRANSPORT", 00:35:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.663 "adrfam": "ipv4", 00:35:49.663 "trsvcid": "$NVMF_PORT", 00:35:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.663 "hdgst": ${hdgst:-false}, 00:35:49.663 "ddgst": ${ddgst:-false} 00:35:49.663 }, 00:35:49.663 "method": "bdev_nvme_attach_controller" 00:35:49.663 } 00:35:49.663 EOF 00:35:49.663 )") 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:49.663 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
jq . 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:49.664 12:20:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:49.664 "params": { 00:35:49.664 "name": "Nvme0", 00:35:49.664 "trtype": "tcp", 00:35:49.664 "traddr": "10.0.0.2", 00:35:49.664 "adrfam": "ipv4", 00:35:49.664 "trsvcid": "4420", 00:35:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.664 "hdgst": false, 00:35:49.664 "ddgst": false 00:35:49.664 }, 00:35:49.664 "method": "bdev_nvme_attach_controller" 00:35:49.664 }' 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.925 12:20:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.193 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:50.193 ... 
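
Annotation: just above, before launching fio, the harness probes whether the spdk_bdev fio plugin links a sanitizer runtime: it ldd's the plugin, greps for libasan and then libclang_rt.asan, and would prepend the hit to LD_PRELOAD so ASAN initializes before fio's own allocations. Here both probes come back empty, so LD_PRELOAD ends up carrying only the plugin itself. Condensed:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev  # path from the trace
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
export LD_PRELOAD="$asan_lib $plugin"
# fio is then launched with the target JSON and the generated job file
# handed over anonymous descriptors:
#   fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The three-thread run whose output follows is that invocation against the type-3 bdev.
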
00:35:50.193 fio-3.35 00:35:50.193 Starting 3 threads 00:35:56.782 00:35:56.782 filename0: (groupid=0, jobs=1): err= 0: pid=1278055: Mon Oct 21 12:20:32 2024 00:35:56.782 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5018msec) 00:35:56.782 slat (nsec): min=5664, max=33353, avg=8479.76, stdev=2268.81 00:35:56.782 clat (usec): min=3696, max=90710, avg=12039.21, stdev=14173.01 00:35:56.782 lat (usec): min=3723, max=90716, avg=12047.69, stdev=14172.82 00:35:56.782 clat percentiles (usec): 00:35:56.782 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6063], 00:35:56.782 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 7046], 00:35:56.782 | 70.00th=[ 7242], 80.00th=[ 7701], 90.00th=[46924], 95.00th=[47973], 00:35:56.782 | 99.00th=[49021], 99.50th=[49546], 99.90th=[88605], 99.95th=[90702], 00:35:56.782 | 99.99th=[90702] 00:35:56.782 bw ( KiB/s): min=20992, max=44288, per=27.27%, avg=31897.60, stdev=7847.14, samples=10 00:35:56.782 iops : min= 164, max= 346, avg=249.20, stdev=61.31, samples=10 00:35:56.782 lat (msec) : 4=0.24%, 10=86.71%, 50=12.81%, 100=0.24% 00:35:56.782 cpu : usr=96.03%, sys=3.73%, ctx=5, majf=0, minf=126 00:35:56.782 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.782 issued rwts: total=1249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.782 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.782 filename0: (groupid=0, jobs=1): err= 0: pid=1278056: Mon Oct 21 12:20:32 2024 00:35:56.782 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(208MiB/5045msec) 00:35:56.782 slat (nsec): min=5834, max=31178, avg=8431.75, stdev=1493.90 00:35:56.782 clat (usec): min=4785, max=49328, avg=9042.64, stdev=5129.68 00:35:56.783 lat (usec): min=4794, max=49338, avg=9051.07, stdev=5129.81 00:35:56.783 clat percentiles (usec): 00:35:56.783 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6849], 00:35:56.783 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8717], 00:35:56.783 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10945], 95.00th=[11731], 00:35:56.783 | 99.00th=[46924], 99.50th=[47449], 99.90th=[49021], 99.95th=[49546], 00:35:56.783 | 99.99th=[49546] 00:35:56.783 bw ( KiB/s): min=34304, max=49664, per=36.42%, avg=42598.40, stdev=5058.76, samples=10 00:35:56.783 iops : min= 268, max= 388, avg=333.00, stdev=39.58, samples=10 00:35:56.783 lat (msec) : 10=76.42%, 20=22.02%, 50=1.56% 00:35:56.783 cpu : usr=93.64%, sys=6.13%, ctx=12, majf=0, minf=83 00:35:56.783 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.783 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.783 filename0: (groupid=0, jobs=1): err= 0: pid=1278058: Mon Oct 21 12:20:32 2024 00:35:56.783 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(212MiB/5045msec) 00:35:56.783 slat (nsec): min=5708, max=68897, avg=8758.54, stdev=2573.97 00:35:56.783 clat (usec): min=3907, max=51632, avg=8897.98, stdev=6936.07 00:35:56.783 lat (usec): min=3916, max=51641, avg=8906.74, stdev=6936.38 00:35:56.783 clat percentiles (usec): 00:35:56.783 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6390], 
00:35:56.783 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 8160], 00:35:56.783 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10945], 00:35:56.783 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[51643], 00:35:56.783 | 99.99th=[51643] 00:35:56.783 bw ( KiB/s): min=30464, max=52736, per=37.01%, avg=43289.60, stdev=8058.87, samples=10 00:35:56.783 iops : min= 238, max= 412, avg=338.20, stdev=62.96, samples=10 00:35:56.783 lat (msec) : 4=0.12%, 10=88.08%, 20=8.85%, 50=2.54%, 100=0.41% 00:35:56.783 cpu : usr=91.08%, sys=6.86%, ctx=346, majf=0, minf=78 00:35:56.783 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.783 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.783 00:35:56.783 Run status group 0 (all jobs): 00:35:56.783 READ: bw=114MiB/s (120MB/s), 31.1MiB/s-42.0MiB/s (32.6MB/s-44.0MB/s), io=576MiB (604MB), run=5018-5045msec 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 bdev_null0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 [2024-10-21 12:20:32.497225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 bdev_null1 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 bdev_null2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.783 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.784 { 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme$subsystem", 00:35:56.784 "trtype": "$TEST_TRANSPORT", 00:35:56.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "$NVMF_PORT", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.784 "hdgst": ${hdgst:-false}, 00:35:56.784 "ddgst": ${ddgst:-false} 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 } 00:35:56.784 EOF 00:35:56.784 )") 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.784 { 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme$subsystem", 00:35:56.784 "trtype": "$TEST_TRANSPORT", 00:35:56.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "$NVMF_PORT", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.784 "hdgst": ${hdgst:-false}, 00:35:56.784 "ddgst": ${ddgst:-false} 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 } 00:35:56.784 EOF 00:35:56.784 )") 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.784 { 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme$subsystem", 00:35:56.784 "trtype": "$TEST_TRANSPORT", 00:35:56.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "$NVMF_PORT", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.784 "hdgst": ${hdgst:-false}, 00:35:56.784 "ddgst": ${ddgst:-false} 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 } 00:35:56.784 EOF 00:35:56.784 )") 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme0", 00:35:56.784 "trtype": "tcp", 00:35:56.784 "traddr": "10.0.0.2", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "4420", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:56.784 "hdgst": false, 00:35:56.784 "ddgst": false 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 },{ 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme1", 00:35:56.784 "trtype": "tcp", 00:35:56.784 "traddr": "10.0.0.2", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "4420", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:56.784 "hdgst": false, 00:35:56.784 "ddgst": false 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 },{ 00:35:56.784 "params": { 00:35:56.784 "name": "Nvme2", 00:35:56.784 "trtype": "tcp", 00:35:56.784 "traddr": "10.0.0.2", 00:35:56.784 "adrfam": "ipv4", 00:35:56.784 "trsvcid": "4420", 00:35:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:56.784 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:56.784 "hdgst": false, 00:35:56.784 "ddgst": false 00:35:56.784 }, 00:35:56.784 "method": "bdev_nvme_attach_controller" 00:35:56.784 }' 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:56.784 12:20:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.784 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.784 ... 00:35:56.785 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.785 ... 00:35:56.785 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.785 ... 00:35:56.785 fio-3.35 00:35:56.785 Starting 24 threads 00:36:09.082 00:36:09.082 filename0: (groupid=0, jobs=1): err= 0: pid=1279524: Mon Oct 21 12:20:44 2024 00:36:09.082 read: IOPS=714, BW=2860KiB/s (2928kB/s)(28.0MiB/10012msec) 00:36:09.082 slat (nsec): min=5823, max=79430, avg=10931.87, stdev=7310.89 00:36:09.082 clat (usec): min=1392, max=41197, avg=22288.06, stdev=5069.63 00:36:09.082 lat (usec): min=1417, max=41204, avg=22298.99, stdev=5069.34 00:36:09.082 clat percentiles (usec): 00:36:09.082 | 1.00th=[ 2180], 5.00th=[13042], 10.00th=[16057], 20.00th=[21365], 00:36:09.082 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.082 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:36:09.082 | 99.00th=[35914], 99.50th=[39060], 99.90th=[40633], 99.95th=[41157], 00:36:09.083 | 99.99th=[41157] 00:36:09.083 bw ( KiB/s): min= 2560, max= 4352, per=4.43%, avg=2856.80, stdev=398.62, samples=20 00:36:09.083 iops : min= 640, max= 1088, avg=714.20, stdev=99.65, samples=20 00:36:09.083 lat (msec) : 2=0.63%, 4=2.28%, 10=0.27%, 20=15.19%, 50=81.64% 00:36:09.083 cpu : usr=98.97%, sys=0.76%, ctx=11, majf=0, minf=35 00:36:09.083 IO depths : 1=4.1%, 2=9.0%, 4=20.9%, 8=57.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=7158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279525: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10010msec) 00:36:09.083 slat (nsec): min=5832, max=75406, avg=9597.50, stdev=6016.28 00:36:09.083 clat (usec): min=11687, max=39333, avg=23666.01, stdev=2034.64 00:36:09.083 lat (usec): min=11694, max=39339, avg=23675.61, stdev=2034.33 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[13173], 5.00th=[20055], 10.00th=[23200], 20.00th=[23725], 00:36:09.083 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.083 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:09.083 | 99.00th=[25822], 99.50th=[26346], 99.90th=[34341], 99.95th=[39060], 00:36:09.083 | 99.99th=[39584] 00:36:09.083 bw ( KiB/s): min= 2560, max= 3040, per=4.17%, avg=2691.20, stdev=106.33, samples=20 00:36:09.083 iops : min= 640, max= 760, avg=672.80, stdev=26.58, samples=20 00:36:09.083 lat (msec) : 20=5.01%, 50=94.99% 00:36:09.083 cpu : usr=98.70%, sys=0.95%, 
ctx=105, majf=0, minf=32 00:36:09.083 IO depths : 1=6.0%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279526: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:09.083 slat (nsec): min=5875, max=80369, avg=20389.85, stdev=12920.31 00:36:09.083 clat (usec): min=9685, max=29367, avg=23863.51, stdev=1190.67 00:36:09.083 lat (usec): min=9706, max=29399, avg=23883.90, stdev=1189.62 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[19268], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.083 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.083 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:09.083 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26870], 99.95th=[28443], 00:36:09.083 | 99.99th=[29492] 00:36:09.083 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2661.05, stdev=67.05, samples=19 00:36:09.083 iops : min= 640, max= 704, avg=665.26, stdev=16.76, samples=19 00:36:09.083 lat (msec) : 10=0.17%, 20=0.86%, 50=98.98% 00:36:09.083 cpu : usr=98.98%, sys=0.73%, ctx=20, majf=0, minf=25 00:36:09.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279527: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10002msec) 00:36:09.083 slat (usec): min=5, max=114, avg=19.90, stdev=12.02 00:36:09.083 clat (usec): min=1440, max=27005, avg=23436.35, stdev=3177.19 00:36:09.083 lat (usec): min=1546, max=27015, avg=23456.25, stdev=3176.72 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[ 2409], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.083 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.083 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:09.083 | 99.00th=[25560], 99.50th=[26084], 99.90th=[26870], 99.95th=[26870], 00:36:09.083 | 99.99th=[27132] 00:36:09.083 bw ( KiB/s): min= 2560, max= 3760, per=4.20%, avg=2710.74, stdev=260.48, samples=19 00:36:09.083 iops : min= 640, max= 940, avg=677.68, stdev=65.12, samples=19 00:36:09.083 lat (msec) : 2=0.13%, 4=1.61%, 10=0.24%, 20=0.97%, 50=97.05% 00:36:09.083 cpu : usr=98.84%, sys=0.87%, ctx=13, majf=0, minf=19 00:36:09.083 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279528: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=663, BW=2654KiB/s 
(2718kB/s)(25.9MiB/10006msec) 00:36:09.083 slat (nsec): min=5840, max=52164, avg=11854.15, stdev=6917.22 00:36:09.083 clat (usec): min=14800, max=31875, avg=23999.01, stdev=929.98 00:36:09.083 lat (usec): min=14811, max=31907, avg=24010.87, stdev=929.56 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:09.083 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.083 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:09.083 | 99.00th=[25822], 99.50th=[26084], 99.90th=[31851], 99.95th=[31851], 00:36:09.083 | 99.99th=[31851] 00:36:09.083 bw ( KiB/s): min= 2560, max= 2693, per=4.12%, avg=2654.58, stdev=58.08, samples=19 00:36:09.083 iops : min= 640, max= 673, avg=663.63, stdev=14.51, samples=19 00:36:09.083 lat (msec) : 20=0.72%, 50=99.28% 00:36:09.083 cpu : usr=98.96%, sys=0.75%, ctx=24, majf=0, minf=30 00:36:09.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279529: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10001msec) 00:36:09.083 slat (nsec): min=5817, max=83899, avg=17521.26, stdev=12861.14 00:36:09.083 clat (usec): min=10233, max=52386, avg=23451.96, stdev=3510.17 00:36:09.083 lat (usec): min=10242, max=52404, avg=23469.48, stdev=3511.38 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[14353], 5.00th=[16581], 10.00th=[18744], 20.00th=[23200], 00:36:09.083 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.083 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[28705], 00:36:09.083 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41681], 99.95th=[52167], 00:36:09.083 | 99.99th=[52167] 00:36:09.083 bw ( KiB/s): min= 2480, max= 2832, per=4.17%, avg=2689.68, stdev=103.68, samples=19 00:36:09.083 iops : min= 620, max= 708, avg=672.42, stdev=25.92, samples=19 00:36:09.083 lat (msec) : 20=13.19%, 50=86.74%, 100=0.07% 00:36:09.083 cpu : usr=97.96%, sys=1.36%, ctx=195, majf=0, minf=22 00:36:09.083 IO depths : 1=2.6%, 2=5.2%, 4=12.9%, 8=68.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=91.2%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279530: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:36:09.083 slat (nsec): min=5830, max=86426, avg=13821.94, stdev=11784.64 00:36:09.083 clat (usec): min=10429, max=33428, avg=23993.13, stdev=1120.57 00:36:09.083 lat (usec): min=10450, max=33449, avg=24006.96, stdev=1119.21 00:36:09.083 clat percentiles (usec): 00:36:09.083 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:09.083 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.083 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:09.083 | 99.00th=[25822], 
99.50th=[26084], 99.90th=[33424], 99.95th=[33424], 00:36:09.083 | 99.99th=[33424] 00:36:09.083 bw ( KiB/s): min= 2554, max= 2688, per=4.11%, avg=2647.26, stdev=61.62, samples=19 00:36:09.083 iops : min= 638, max= 672, avg=661.79, stdev=15.45, samples=19 00:36:09.083 lat (msec) : 20=0.48%, 50=99.52% 00:36:09.083 cpu : usr=99.01%, sys=0.69%, ctx=20, majf=0, minf=24 00:36:09.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:09.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.083 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.083 filename0: (groupid=0, jobs=1): err= 0: pid=1279531: Mon Oct 21 12:20:44 2024 00:36:09.083 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10003msec) 00:36:09.083 slat (nsec): min=5864, max=72791, avg=9796.91, stdev=6020.59 00:36:09.083 clat (usec): min=10209, max=40169, avg=23863.29, stdev=1634.51 00:36:09.083 lat (usec): min=10216, max=40176, avg=23873.09, stdev=1633.72 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[15926], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.084 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:09.084 | 99.00th=[26608], 99.50th=[28181], 99.90th=[30540], 99.95th=[40109], 00:36:09.084 | 99.99th=[40109] 00:36:09.084 bw ( KiB/s): min= 2560, max= 2864, per=4.14%, avg=2672.84, stdev=72.44, samples=19 00:36:09.084 iops : min= 640, max= 716, avg=668.21, stdev=18.11, samples=19 00:36:09.084 lat (msec) : 20=2.63%, 50=97.37% 00:36:09.084 cpu : usr=98.50%, sys=1.03%, ctx=74, majf=0, minf=28 00:36:09.084 IO depths : 1=5.7%, 2=11.7%, 4=24.4%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279532: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10017msec) 00:36:09.084 slat (usec): min=5, max=194, avg=13.33, stdev= 9.33 00:36:09.084 clat (usec): min=12886, max=32508, avg=23976.22, stdev=1462.52 00:36:09.084 lat (usec): min=12897, max=32531, avg=23989.55, stdev=1463.06 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.084 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:36:09.084 | 99.00th=[29492], 99.50th=[30802], 99.90th=[32113], 99.95th=[32375], 00:36:09.084 | 99.99th=[32637] 00:36:09.084 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2654.32, stdev=57.91, samples=19 00:36:09.084 iops : min= 640, max= 672, avg=663.58, stdev=14.48, samples=19 00:36:09.084 lat (msec) : 20=2.52%, 50=97.48% 00:36:09.084 cpu : usr=98.73%, sys=0.97%, ctx=18, majf=0, minf=21 00:36:09.084 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279533: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=663, BW=2654KiB/s (2717kB/s)(25.9MiB/10009msec) 00:36:09.084 slat (nsec): min=5845, max=64438, avg=16560.39, stdev=10358.76 00:36:09.084 clat (usec): min=10219, max=32642, avg=23961.84, stdev=1269.42 00:36:09.084 lat (usec): min=10231, max=32665, avg=23978.40, stdev=1269.84 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.084 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:36:09.084 | 99.00th=[28181], 99.50th=[28967], 99.90th=[32637], 99.95th=[32637], 00:36:09.084 | 99.99th=[32637] 00:36:09.084 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2647.58, stdev=59.48, samples=19 00:36:09.084 iops : min= 640, max= 672, avg=661.89, stdev=14.87, samples=19 00:36:09.084 lat (msec) : 20=1.33%, 50=98.67% 00:36:09.084 cpu : usr=98.98%, sys=0.74%, ctx=12, majf=0, minf=28 00:36:09.084 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279534: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:09.084 slat (nsec): min=5819, max=92267, avg=23837.52, stdev=13863.62 00:36:09.084 clat (usec): min=4648, max=44583, avg=23816.72, stdev=1892.49 00:36:09.084 lat (usec): min=4661, max=44601, avg=23840.56, stdev=1892.71 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[21365], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:09.084 | 99.00th=[25560], 99.50th=[26084], 99.90th=[44303], 99.95th=[44303], 00:36:09.084 | 99.99th=[44827] 00:36:09.084 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2647.58, stdev=74.55, samples=19 00:36:09.084 iops : min= 608, max= 672, avg=661.89, stdev=18.64, samples=19 00:36:09.084 lat (msec) : 10=0.48%, 20=0.48%, 50=99.04% 00:36:09.084 cpu : usr=98.79%, sys=0.89%, ctx=20, majf=0, minf=20 00:36:09.084 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279535: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:09.084 slat (nsec): min=5683, max=99304, avg=23496.09, stdev=15712.92 00:36:09.084 clat (usec): min=4823, max=44454, avg=23840.08, stdev=1966.89 00:36:09.084 lat (usec): min=4828, max=44471, avg=23863.57, stdev=1967.04 00:36:09.084 clat percentiles (usec): 
00:36:09.084 | 1.00th=[15926], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:09.084 | 99.00th=[25822], 99.50th=[30540], 99.90th=[44303], 99.95th=[44303], 00:36:09.084 | 99.99th=[44303] 00:36:09.084 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2647.58, stdev=74.55, samples=19 00:36:09.084 iops : min= 608, max= 672, avg=661.89, stdev=18.64, samples=19 00:36:09.084 lat (msec) : 10=0.48%, 20=0.75%, 50=98.77% 00:36:09.084 cpu : usr=98.96%, sys=0.70%, ctx=81, majf=0, minf=26 00:36:09.084 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279536: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=667, BW=2669KiB/s (2733kB/s)(26.1MiB/10008msec) 00:36:09.084 slat (nsec): min=5820, max=79373, avg=19316.38, stdev=11719.94 00:36:09.084 clat (usec): min=9886, max=39532, avg=23809.24, stdev=1926.66 00:36:09.084 lat (usec): min=9905, max=39540, avg=23828.55, stdev=1927.09 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[16188], 5.00th=[21890], 10.00th=[23200], 20.00th=[23462], 00:36:09.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.084 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:36:09.084 | 99.00th=[29754], 99.50th=[32900], 99.90th=[35914], 99.95th=[35914], 00:36:09.084 | 99.99th=[39584] 00:36:09.084 bw ( KiB/s): min= 2544, max= 2864, per=4.13%, avg=2664.80, stdev=83.30, samples=20 00:36:09.084 iops : min= 636, max= 716, avg=666.20, stdev=20.82, samples=20 00:36:09.084 lat (msec) : 10=0.07%, 20=3.80%, 50=96.12% 00:36:09.084 cpu : usr=98.90%, sys=0.79%, ctx=13, majf=0, minf=22 00:36:09.084 IO depths : 1=4.8%, 2=10.5%, 4=23.7%, 8=53.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279537: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10004msec) 00:36:09.084 slat (usec): min=5, max=105, avg=19.69, stdev=15.44 00:36:09.084 clat (usec): min=9353, max=43357, avg=23713.72, stdev=4765.31 00:36:09.084 lat (usec): min=9359, max=43377, avg=23733.40, stdev=4767.62 00:36:09.084 clat percentiles (usec): 00:36:09.084 | 1.00th=[13042], 5.00th=[16188], 10.00th=[17957], 20.00th=[20317], 00:36:09.084 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:09.084 | 70.00th=[24249], 80.00th=[25035], 90.00th=[29230], 95.00th=[33162], 00:36:09.084 | 99.00th=[39060], 99.50th=[40633], 99.90th=[42730], 99.95th=[43254], 00:36:09.084 | 99.99th=[43254] 00:36:09.084 bw ( KiB/s): min= 2560, max= 2864, per=4.16%, avg=2684.63, stdev=82.21, samples=19 00:36:09.084 iops : min= 640, max= 716, avg=671.16, stdev=20.55, samples=19 00:36:09.084 lat (msec) : 10=0.09%, 20=17.94%, 50=81.97% 
00:36:09.084 cpu : usr=98.58%, sys=0.93%, ctx=66, majf=0, minf=34 00:36:09.084 IO depths : 1=2.4%, 2=4.8%, 4=13.0%, 8=69.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.084 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.084 filename1: (groupid=0, jobs=1): err= 0: pid=1279538: Mon Oct 21 12:20:44 2024 00:36:09.084 read: IOPS=664, BW=2660KiB/s (2724kB/s)(26.0MiB/10004msec) 00:36:09.084 slat (nsec): min=5835, max=96793, avg=23061.64, stdev=14383.15 00:36:09.084 clat (usec): min=13544, max=41143, avg=23858.41, stdev=1705.84 00:36:09.085 lat (usec): min=13560, max=41149, avg=23881.47, stdev=1705.92 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:09.085 | 99.00th=[30016], 99.50th=[31065], 99.90th=[40109], 99.95th=[41157], 00:36:09.085 | 99.99th=[41157] 00:36:09.085 bw ( KiB/s): min= 2560, max= 2864, per=4.12%, avg=2659.37, stdev=75.91, samples=19 00:36:09.085 iops : min= 640, max= 716, avg=664.84, stdev=18.98, samples=19 00:36:09.085 lat (msec) : 20=2.54%, 50=97.46% 00:36:09.085 cpu : usr=98.95%, sys=0.73%, ctx=20, majf=0, minf=30 00:36:09.085 IO depths : 1=5.8%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename1: (groupid=0, jobs=1): err= 0: pid=1279539: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=709, BW=2837KiB/s (2905kB/s)(27.7MiB/10007msec) 00:36:09.085 slat (nsec): min=5812, max=72209, avg=10565.22, stdev=7193.84 00:36:09.085 clat (usec): min=8606, max=43172, avg=22472.14, stdev=3660.10 00:36:09.085 lat (usec): min=8612, max=43178, avg=22482.71, stdev=3661.52 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[ 9634], 5.00th=[15139], 10.00th=[16450], 20.00th=[21103], 00:36:09.085 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:09.085 | 99.00th=[26608], 99.50th=[38011], 99.90th=[40633], 99.95th=[41157], 00:36:09.085 | 99.99th=[43254] 00:36:09.085 bw ( KiB/s): min= 2560, max= 3536, per=4.40%, avg=2835.20, stdev=297.24, samples=20 00:36:09.085 iops : min= 640, max= 884, avg=708.80, stdev=74.31, samples=20 00:36:09.085 lat (msec) : 10=1.96%, 20=15.98%, 50=82.07% 00:36:09.085 cpu : usr=98.97%, sys=0.73%, ctx=18, majf=0, minf=25 00:36:09.085 IO depths : 1=3.8%, 2=8.5%, 4=20.1%, 8=58.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=7098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename2: (groupid=0, jobs=1): err= 0: pid=1279540: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=664, 
BW=2657KiB/s (2720kB/s)(25.9MiB/10002msec) 00:36:09.085 slat (nsec): min=5663, max=99144, avg=16973.47, stdev=14112.88 00:36:09.085 clat (usec): min=6769, max=60855, avg=24019.95, stdev=3266.46 00:36:09.085 lat (usec): min=6775, max=60880, avg=24036.92, stdev=3266.93 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[14615], 5.00th=[18744], 10.00th=[21627], 20.00th=[23462], 00:36:09.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.085 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[29230], 00:36:09.085 | 99.00th=[35914], 99.50th=[38536], 99.90th=[46400], 99.95th=[60556], 00:36:09.085 | 99.99th=[61080] 00:36:09.085 bw ( KiB/s): min= 2392, max= 2848, per=4.10%, avg=2643.79, stdev=100.15, samples=19 00:36:09.085 iops : min= 598, max= 712, avg=660.95, stdev=25.04, samples=19 00:36:09.085 lat (msec) : 10=0.39%, 20=6.20%, 50=93.33%, 100=0.08% 00:36:09.085 cpu : usr=98.88%, sys=0.79%, ctx=53, majf=0, minf=67 00:36:09.085 IO depths : 1=0.3%, 2=0.7%, 4=2.9%, 8=79.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=89.5%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=6643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename2: (groupid=0, jobs=1): err= 0: pid=1279541: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:09.085 slat (nsec): min=5739, max=94128, avg=25497.37, stdev=15017.31 00:36:09.085 clat (usec): min=4755, max=44441, avg=23793.79, stdev=1880.16 00:36:09.085 lat (usec): min=4778, max=44461, avg=23819.28, stdev=1880.68 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[21103], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.085 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:09.085 | 99.00th=[25560], 99.50th=[26084], 99.90th=[44303], 99.95th=[44303], 00:36:09.085 | 99.99th=[44303] 00:36:09.085 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2647.58, stdev=74.55, samples=19 00:36:09.085 iops : min= 608, max= 672, avg=661.89, stdev=18.64, samples=19 00:36:09.085 lat (msec) : 10=0.48%, 20=0.48%, 50=99.04% 00:36:09.085 cpu : usr=98.93%, sys=0.71%, ctx=131, majf=0, minf=21 00:36:09.085 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename2: (groupid=0, jobs=1): err= 0: pid=1279542: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:09.085 slat (nsec): min=5814, max=62846, avg=17828.69, stdev=9737.19 00:36:09.085 clat (usec): min=4690, max=44875, avg=23897.71, stdev=1913.67 00:36:09.085 lat (usec): min=4699, max=44896, avg=23915.54, stdev=1913.76 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:09.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.085 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:09.085 | 
99.00th=[25560], 99.50th=[27919], 99.90th=[44827], 99.95th=[44827], 00:36:09.085 | 99.99th=[44827] 00:36:09.085 bw ( KiB/s): min= 2436, max= 2944, per=4.12%, avg=2656.20, stdev=100.19, samples=20 00:36:09.085 iops : min= 609, max= 736, avg=664.05, stdev=25.05, samples=20 00:36:09.085 lat (msec) : 10=0.48%, 20=0.57%, 50=98.95% 00:36:09.085 cpu : usr=99.00%, sys=0.69%, ctx=28, majf=0, minf=30 00:36:09.085 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename2: (groupid=0, jobs=1): err= 0: pid=1279543: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=703, BW=2814KiB/s (2881kB/s)(27.5MiB/10017msec) 00:36:09.085 slat (nsec): min=5814, max=64158, avg=10622.75, stdev=7856.46 00:36:09.085 clat (usec): min=7663, max=39336, avg=22672.88, stdev=4433.31 00:36:09.085 lat (usec): min=7672, max=39348, avg=22683.50, stdev=4434.59 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[13304], 5.00th=[15139], 10.00th=[16188], 20.00th=[18482], 00:36:09.085 | 30.00th=[21103], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:09.085 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27132], 95.00th=[30016], 00:36:09.085 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:36:09.085 | 99.99th=[39584] 00:36:09.085 bw ( KiB/s): min= 2688, max= 2976, per=4.36%, avg=2810.11, stdev=93.33, samples=19 00:36:09.085 iops : min= 672, max= 744, avg=702.53, stdev=23.33, samples=19 00:36:09.085 lat (msec) : 10=0.06%, 20=24.72%, 50=75.22% 00:36:09.085 cpu : usr=98.95%, sys=0.76%, ctx=25, majf=0, minf=23 00:36:09.085 IO depths : 1=1.5%, 2=3.1%, 4=9.8%, 8=73.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:09.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.085 issued rwts: total=7046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.085 filename2: (groupid=0, jobs=1): err= 0: pid=1279544: Mon Oct 21 12:20:44 2024 00:36:09.085 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:09.085 slat (nsec): min=5814, max=82236, avg=15931.65, stdev=12434.35 00:36:09.085 clat (usec): min=9174, max=29971, avg=23905.40, stdev=1406.45 00:36:09.085 lat (usec): min=9187, max=29980, avg=23921.33, stdev=1405.59 00:36:09.085 clat percentiles (usec): 00:36:09.085 | 1.00th=[17171], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:09.085 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:36:09.085 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:36:09.085 | 99.99th=[30016] 00:36:09.085 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2661.05, stdev=80.72, samples=19 00:36:09.085 iops : min= 640, max= 704, avg=665.26, stdev=20.18, samples=19 00:36:09.086 lat (msec) : 10=0.12%, 20=1.68%, 50=98.20% 00:36:09.086 cpu : usr=97.50%, sys=1.58%, ctx=274, majf=0, minf=21 00:36:09.086 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:09.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 
complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.086 filename2: (groupid=0, jobs=1): err= 0: pid=1279545: Mon Oct 21 12:20:44 2024 00:36:09.086 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.0MiB/10005msec) 00:36:09.086 slat (nsec): min=5639, max=81047, avg=15307.16, stdev=12476.79 00:36:09.086 clat (usec): min=4373, max=52984, avg=23960.07, stdev=3604.95 00:36:09.086 lat (usec): min=4379, max=53000, avg=23975.38, stdev=3605.05 00:36:09.086 clat percentiles (usec): 00:36:09.086 | 1.00th=[13042], 5.00th=[17957], 10.00th=[20317], 20.00th=[23462], 00:36:09.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:36:09.086 | 70.00th=[24511], 80.00th=[24773], 90.00th=[27132], 95.00th=[30016], 00:36:09.086 | 99.00th=[35390], 99.50th=[38536], 99.90th=[41157], 99.95th=[52691], 00:36:09.086 | 99.99th=[53216] 00:36:09.086 bw ( KiB/s): min= 2432, max= 2768, per=4.10%, avg=2646.74, stdev=68.83, samples=19 00:36:09.086 iops : min= 608, max= 692, avg=661.68, stdev=17.21, samples=19 00:36:09.086 lat (msec) : 10=0.48%, 20=8.98%, 50=90.47%, 100=0.08% 00:36:09.086 cpu : usr=98.54%, sys=0.99%, ctx=109, majf=0, minf=29 00:36:09.086 IO depths : 1=0.4%, 2=0.9%, 4=4.2%, 8=78.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:36:09.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.086 filename2: (groupid=0, jobs=1): err= 0: pid=1279546: Mon Oct 21 12:20:44 2024 00:36:09.086 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10005msec) 00:36:09.086 slat (nsec): min=5764, max=80786, avg=20104.98, stdev=12268.96 00:36:09.086 clat (usec): min=3674, max=45262, avg=23811.54, stdev=3263.19 00:36:09.086 lat (usec): min=3681, max=45280, avg=23831.64, stdev=3263.75 00:36:09.086 clat percentiles (usec): 00:36:09.086 | 1.00th=[11731], 5.00th=[18744], 10.00th=[23200], 20.00th=[23462], 00:36:09.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.086 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25560], 00:36:09.086 | 99.00th=[36963], 99.50th=[38536], 99.90th=[45351], 99.95th=[45351], 00:36:09.086 | 99.99th=[45351] 00:36:09.086 bw ( KiB/s): min= 2436, max= 2944, per=4.13%, avg=2665.00, stdev=100.91, samples=20 00:36:09.086 iops : min= 609, max= 736, avg=666.25, stdev=25.23, samples=20 00:36:09.086 lat (msec) : 4=0.04%, 10=0.57%, 20=4.90%, 50=94.49% 00:36:09.086 cpu : usr=98.58%, sys=1.11%, ctx=17, majf=0, minf=23 00:36:09.086 IO depths : 1=3.1%, 2=8.0%, 4=20.7%, 8=58.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:09.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 complete : 0=0.0%, 4=93.3%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.086 filename2: (groupid=0, jobs=1): err= 0: pid=1279548: Mon Oct 21 12:20:44 2024 00:36:09.086 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10006msec) 00:36:09.086 slat (nsec): min=5841, max=87764, avg=25383.06, stdev=13882.24 00:36:09.086 clat (usec): min=12925, max=32316, avg=23877.51, stdev=914.00 00:36:09.086 lat (usec): 
min=12931, max=32347, avg=23902.89, stdev=913.91 00:36:09.086 clat percentiles (usec): 00:36:09.086 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:09.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:09.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:09.086 | 99.00th=[25560], 99.50th=[27657], 99.90th=[28705], 99.95th=[28705], 00:36:09.086 | 99.99th=[32375] 00:36:09.086 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2649.60, stdev=60.18, samples=20 00:36:09.086 iops : min= 640, max= 672, avg=662.40, stdev=15.05, samples=20 00:36:09.086 lat (msec) : 20=0.48%, 50=99.52% 00:36:09.086 cpu : usr=98.95%, sys=0.74%, ctx=33, majf=0, minf=29 00:36:09.086 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:09.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.086 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.086 00:36:09.086 Run status group 0 (all jobs): 00:36:09.086 READ: bw=63.0MiB/s (66.0MB/s), 2654KiB/s-2860KiB/s (2717kB/s-2928kB/s), io=631MiB (661MB), run=10001-10017msec 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 bdev_null0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.086 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.086 12:20:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 [2024-10-21 12:20:44.392555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 bdev_null1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 
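The records that follow trace nvmf/common.sh's gen_nvmf_target_json 0 1: for each subsystem index it appends one bdev_nvme_attach_controller stanza to the config array via a here-doc, then comma-joins the stanzas (IFS=,) and pretty-prints the result through jq onto /dev/fd/62, where the fio bdev plugin reads it as --spdk_json_conf. Below is a minimal standalone sketch of the same pattern, assuming bash and jq; the tcp/10.0.0.2/4420 values are the substitutions this run prints further down, and the outer "subsystems"/"config" wrapper is an assumption about what jq receives, since only the stanza loop and the IFS=, join are visible in this trace:

config=()
for subsystem in 0 1; do
	# one attach stanza per NVMe-oF target subsystem, mirroring the
	# here-doc in the traced loop (hdgst/ddgst default to false here,
	# matching the printf output later in this trace)
	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# comma-join the stanzas and pretty-print them; the harness hands the
# result to the fio plugin on /dev/fd/62
jq . <<JSON
{"subsystems": [{"subsystem": "bdev", "config": [$(IFS=,; printf '%s' "${config[*]}")]}]}
JSON

Swapping tcp/10.0.0.2/4420 for other $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT values yields exactly the stanzas shown in the printf record below.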
00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:09.087 { 00:36:09.087 "params": { 00:36:09.087 "name": "Nvme$subsystem", 00:36:09.087 "trtype": "$TEST_TRANSPORT", 00:36:09.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.087 "adrfam": "ipv4", 00:36:09.087 "trsvcid": "$NVMF_PORT", 00:36:09.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.087 "hdgst": ${hdgst:-false}, 00:36:09.087 "ddgst": ${ddgst:-false} 00:36:09.087 }, 00:36:09.087 "method": "bdev_nvme_attach_controller" 00:36:09.087 } 00:36:09.087 EOF 00:36:09.087 )") 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:09.087 { 00:36:09.087 "params": { 00:36:09.087 "name": "Nvme$subsystem", 00:36:09.087 "trtype": "$TEST_TRANSPORT", 00:36:09.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.087 "adrfam": "ipv4", 00:36:09.087 "trsvcid": "$NVMF_PORT", 00:36:09.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.087 "hdgst": ${hdgst:-false}, 00:36:09.087 "ddgst": ${ddgst:-false} 00:36:09.087 }, 00:36:09.087 "method": "bdev_nvme_attach_controller" 
00:36:09.087 } 00:36:09.087 EOF 00:36:09.087 )") 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:09.087 "params": { 00:36:09.087 "name": "Nvme0", 00:36:09.087 "trtype": "tcp", 00:36:09.087 "traddr": "10.0.0.2", 00:36:09.087 "adrfam": "ipv4", 00:36:09.087 "trsvcid": "4420", 00:36:09.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.087 "hdgst": false, 00:36:09.087 "ddgst": false 00:36:09.087 }, 00:36:09.087 "method": "bdev_nvme_attach_controller" 00:36:09.087 },{ 00:36:09.087 "params": { 00:36:09.087 "name": "Nvme1", 00:36:09.087 "trtype": "tcp", 00:36:09.087 "traddr": "10.0.0.2", 00:36:09.087 "adrfam": "ipv4", 00:36:09.087 "trsvcid": "4420", 00:36:09.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.087 "hdgst": false, 00:36:09.087 "ddgst": false 00:36:09.087 }, 00:36:09.087 "method": "bdev_nvme_attach_controller" 00:36:09.087 }' 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.087 12:20:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.087 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.087 ... 00:36:09.087 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.087 ... 
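The JSON printed just above is what gen_nvmf_target_json assembles: one bdev_nvme_attach_controller fragment per subsystem, merged with jq and handed to fio on /dev/fd/62, while the generated job file arrives on /dev/fd/61 and the SPDK fio plugin is preloaded. A sketch of the same plumbing for a single controller: only the inner params/method fragment appears verbatim in the trace, so the top-level subsystems/bdev envelope here is an assumption about what --spdk_json_conf expects, and job.fio is a placeholder for the job file the test generates itself.

```bash
# Sketch: hand an SPDK bdev config to fio over a file descriptor, as the
# wrapper does with /dev/fd/62. The envelope is assumed; the inner
# fragment is copied from the trace output above.
json='{"subsystems": [{"subsystem": "bdev", "config": [{
  "params": {
    "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}]}]}'

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=<(printf '%s' "$json") job.fio
```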
00:36:09.087 fio-3.35 00:36:09.087 Starting 4 threads 00:36:14.428 00:36:14.428 filename0: (groupid=0, jobs=1): err= 0: pid=1281973: Mon Oct 21 12:20:50 2024 00:36:14.428 read: IOPS=3007, BW=23.5MiB/s (24.6MB/s)(118MiB/5002msec) 00:36:14.428 slat (nsec): min=5649, max=47007, avg=8787.92, stdev=1829.74 00:36:14.428 clat (usec): min=930, max=4672, avg=2638.30, stdev=391.40 00:36:14.428 lat (usec): min=948, max=4680, avg=2647.09, stdev=391.19 00:36:14.428 clat percentiles (usec): 00:36:14.428 | 1.00th=[ 1844], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2343], 00:36:14.428 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2737], 00:36:14.428 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3490], 00:36:14.428 | 99.00th=[ 3720], 99.50th=[ 3818], 99.90th=[ 4113], 99.95th=[ 4293], 00:36:14.428 | 99.99th=[ 4686] 00:36:14.428 bw ( KiB/s): min=23664, max=24464, per=25.81%, avg=24133.33, stdev=263.64, samples=9 00:36:14.428 iops : min= 2958, max= 3058, avg=3016.67, stdev=32.95, samples=9 00:36:14.428 lat (usec) : 1000=0.03% 00:36:14.428 lat (msec) : 2=2.25%, 4=97.56%, 10=0.16% 00:36:14.428 cpu : usr=96.88%, sys=2.90%, ctx=7, majf=0, minf=35 00:36:14.428 IO depths : 1=0.1%, 2=0.5%, 4=67.9%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.428 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.428 issued rwts: total=15046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.428 filename0: (groupid=0, jobs=1): err= 0: pid=1281974: Mon Oct 21 12:20:50 2024 00:36:14.428 read: IOPS=2831, BW=22.1MiB/s (23.2MB/s)(111MiB/5002msec) 00:36:14.428 slat (nsec): min=5643, max=65736, avg=8058.03, stdev=1899.17 00:36:14.428 clat (usec): min=1779, max=4895, avg=2802.30, stdev=254.01 00:36:14.428 lat (usec): min=1785, max=4901, avg=2810.35, stdev=253.97 00:36:14.428 clat percentiles (usec): 00:36:14.428 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:36:14.428 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:36:14.428 | 70.00th=[ 2933], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3195], 00:36:14.428 | 99.00th=[ 3884], 99.50th=[ 4113], 99.90th=[ 4555], 99.95th=[ 4817], 00:36:14.428 | 99.99th=[ 4883] 00:36:14.428 bw ( KiB/s): min=22432, max=22928, per=24.26%, avg=22682.67, stdev=163.56, samples=9 00:36:14.428 iops : min= 2804, max= 2866, avg=2835.33, stdev=20.45, samples=9 00:36:14.428 lat (msec) : 2=0.12%, 4=99.10%, 10=0.78% 00:36:14.428 cpu : usr=95.98%, sys=3.58%, ctx=185, majf=0, minf=111 00:36:14.428 IO depths : 1=0.1%, 2=0.1%, 4=74.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.428 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.428 issued rwts: total=14165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.428 filename1: (groupid=0, jobs=1): err= 0: pid=1281975: Mon Oct 21 12:20:50 2024 00:36:14.428 read: IOPS=3000, BW=23.4MiB/s (24.6MB/s)(117MiB/5002msec) 00:36:14.428 slat (nsec): min=5649, max=72375, avg=6350.34, stdev=1888.66 00:36:14.428 clat (usec): min=1204, max=4406, avg=2650.17, stdev=403.37 00:36:14.428 lat (usec): min=1210, max=4412, avg=2656.52, stdev=403.41 00:36:14.428 clat percentiles (usec): 00:36:14.428 | 1.00th=[ 1926], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 
00:36:14.428 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:14.429 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 3130], 95.00th=[ 3523], 00:36:14.429 | 99.00th=[ 3785], 99.50th=[ 3916], 99.90th=[ 4228], 99.95th=[ 4293], 00:36:14.429 | 99.99th=[ 4424] 00:36:14.429 bw ( KiB/s): min=23776, max=24032, per=25.57%, avg=23912.78, stdev=71.56, samples=9 00:36:14.429 iops : min= 2972, max= 3004, avg=2989.00, stdev= 8.92, samples=9 00:36:14.429 lat (msec) : 2=1.57%, 4=98.07%, 10=0.36% 00:36:14.429 cpu : usr=97.08%, sys=2.70%, ctx=7, majf=0, minf=93 00:36:14.429 IO depths : 1=0.1%, 2=0.2%, 4=69.7%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.429 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.429 issued rwts: total=15008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.429 filename1: (groupid=0, jobs=1): err= 0: pid=1281976: Mon Oct 21 12:20:50 2024 00:36:14.429 read: IOPS=2848, BW=22.3MiB/s (23.3MB/s)(111MiB/5002msec) 00:36:14.429 slat (nsec): min=5649, max=53484, avg=6194.19, stdev=1463.01 00:36:14.429 clat (usec): min=1635, max=6086, avg=2790.57, stdev=246.78 00:36:14.429 lat (usec): min=1641, max=6112, avg=2796.76, stdev=246.87 00:36:14.429 clat percentiles (usec): 00:36:14.429 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:36:14.429 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:36:14.429 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3130], 00:36:14.429 | 99.00th=[ 3752], 99.50th=[ 4113], 99.90th=[ 4686], 99.95th=[ 5800], 00:36:14.429 | 99.99th=[ 5866] 00:36:14.429 bw ( KiB/s): min=22444, max=22944, per=24.38%, avg=22796.00, stdev=164.20, samples=9 00:36:14.429 iops : min= 2805, max= 2868, avg=2849.44, stdev=20.66, samples=9 00:36:14.429 lat (msec) : 2=0.24%, 4=99.13%, 10=0.63% 00:36:14.429 cpu : usr=96.78%, sys=2.98%, ctx=8, majf=0, minf=63 00:36:14.429 IO depths : 1=0.1%, 2=0.1%, 4=74.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.429 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.429 issued rwts: total=14249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.429 00:36:14.429 Run status group 0 (all jobs): 00:36:14.429 READ: bw=91.3MiB/s (95.8MB/s), 22.1MiB/s-23.5MiB/s (23.2MB/s-24.6MB/s), io=457MiB (479MB), run=5002-5002msec 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 12:20:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 00:36:14.429 real 0m24.720s 00:36:14.429 user 5m20.090s 00:36:14.429 sys 0m4.694s 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 ************************************ 00:36:14.429 END TEST fio_dif_rand_params 00:36:14.429 ************************************ 00:36:14.429 12:20:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:14.429 12:20:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:14.429 12:20:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 ************************************ 00:36:14.429 START TEST fio_dif_digest 00:36:14.429 ************************************ 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 bdev_null0 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.429 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.690 [2024-10-21 12:20:51.036689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:14.690 { 00:36:14.690 "params": { 00:36:14.690 "name": "Nvme$subsystem", 00:36:14.690 "trtype": "$TEST_TRANSPORT", 00:36:14.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.690 "adrfam": "ipv4", 00:36:14.690 "trsvcid": "$NVMF_PORT", 00:36:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.690 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:14.690 "hdgst": ${hdgst:-false}, 00:36:14.690 "ddgst": ${ddgst:-false} 00:36:14.690 }, 00:36:14.690 "method": "bdev_nvme_attach_controller" 00:36:14.690 } 00:36:14.690 EOF 00:36:14.690 )") 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:14.690 "params": { 00:36:14.690 "name": "Nvme0", 00:36:14.690 "trtype": "tcp", 00:36:14.690 "traddr": "10.0.0.2", 00:36:14.690 "adrfam": "ipv4", 00:36:14.690 "trsvcid": "4420", 00:36:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.690 "hdgst": true, 00:36:14.690 "ddgst": true 00:36:14.690 }, 00:36:14.690 "method": "bdev_nvme_attach_controller" 00:36:14.690 }' 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:14.690 12:20:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.950 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:14.951 ... 
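Two things happen between the resolved JSON above and the fio banner below. First, the harness probes the plugin with ldd for sanitizer runtimes, exactly as the grep libasan / grep libclang_rt.asan lines show; if the plugin is ASan-instrumented, the runtime must precede it in LD_PRELOAD. The probe reduces to:

```bash
# Sketch of the sanitizer probe from the trace. ldd prints lines like
# "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)", so field 3 is the path.
plugin=./build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Sanitizer runtime (if any) first, then the fio plugin itself
export LD_PRELOAD="$asan_lib $plugin"
```

Second, the job file travels over /dev/fd/61 and is never echoed, so the reconstruction below is illustrative only. Every value in it is one the trace declares for the digest run (bs=128k, numjobs=3, iodepth=3, runtime=10, randread), thread=1 is what the spdk_bdev engine requires, and the bdev name Nvme0n1 is assumed from the Nvme0 controller attached above.

```bash
# Illustrative reconstruction -- gen_fio_conf's real output is not in the log.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF
```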
00:36:14.951 fio-3.35 00:36:14.951 Starting 3 threads 00:36:27.190 00:36:27.190 filename0: (groupid=0, jobs=1): err= 0: pid=1283257: Mon Oct 21 12:21:02 2024 00:36:27.190 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(241MiB/10048msec) 00:36:27.190 slat (nsec): min=5991, max=32314, avg=7372.46, stdev=1534.64 00:36:27.190 clat (msec): min=6, max=131, avg=15.61, stdev=15.55 00:36:27.190 lat (msec): min=6, max=131, avg=15.62, stdev=15.55 00:36:27.190 clat percentiles (msec): 00:36:27.190 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:36:27.190 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:36:27.190 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 51], 95.00th=[ 52], 00:36:27.190 | 99.00th=[ 53], 99.50th=[ 91], 99.90th=[ 131], 99.95th=[ 132], 00:36:27.190 | 99.99th=[ 132] 00:36:27.190 bw ( KiB/s): min=16896, max=39936, per=22.02%, avg=24638.00, stdev=5955.79, samples=20 00:36:27.190 iops : min= 132, max= 312, avg=192.45, stdev=46.56, samples=20 00:36:27.190 lat (msec) : 10=56.77%, 20=29.53%, 50=4.05%, 100=9.55%, 250=0.10% 00:36:27.190 cpu : usr=94.92%, sys=4.81%, ctx=21, majf=0, minf=75 00:36:27.190 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.190 filename0: (groupid=0, jobs=1): err= 0: pid=1283258: Mon Oct 21 12:21:02 2024 00:36:27.190 read: IOPS=337, BW=42.2MiB/s (44.2MB/s)(422MiB/10004msec) 00:36:27.190 slat (nsec): min=5857, max=31741, avg=7061.51, stdev=1248.24 00:36:27.190 clat (usec): min=5055, max=13837, avg=8883.73, stdev=1540.76 00:36:27.190 lat (usec): min=5062, max=13844, avg=8890.79, stdev=1540.77 00:36:27.190 clat percentiles (usec): 00:36:27.190 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7439], 00:36:27.190 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:36:27.190 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:36:27.190 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13435], 99.95th=[13698], 00:36:27.190 | 99.99th=[13829] 00:36:27.190 bw ( KiB/s): min=36096, max=46848, per=38.64%, avg=43237.05, stdev=2408.90, samples=19 00:36:27.190 iops : min= 282, max= 366, avg=337.79, stdev=18.82, samples=19 00:36:27.190 lat (msec) : 10=73.10%, 20=26.90% 00:36:27.190 cpu : usr=93.36%, sys=6.38%, ctx=33, majf=0, minf=156 00:36:27.190 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 issued rwts: total=3375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.190 filename0: (groupid=0, jobs=1): err= 0: pid=1283259: Mon Oct 21 12:21:02 2024 00:36:27.190 read: IOPS=346, BW=43.3MiB/s (45.5MB/s)(435MiB/10044msec) 00:36:27.190 slat (nsec): min=5960, max=42397, avg=7606.00, stdev=1635.50 00:36:27.190 clat (usec): min=4637, max=52164, avg=8630.13, stdev=2121.13 00:36:27.190 lat (usec): min=4643, max=52170, avg=8637.74, stdev=2121.02 00:36:27.190 clat percentiles (usec): 00:36:27.190 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7242], 00:36:27.190 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 
8586], 60.00th=[ 9110], 00:36:27.190 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10814], 00:36:27.190 | 99.00th=[11731], 99.50th=[12125], 99.90th=[49021], 99.95th=[51643], 00:36:27.190 | 99.99th=[52167] 00:36:27.190 bw ( KiB/s): min=39168, max=48384, per=39.81%, avg=44556.80, stdev=2172.99, samples=20 00:36:27.190 iops : min= 306, max= 378, avg=348.10, stdev=16.98, samples=20 00:36:27.190 lat (msec) : 10=82.60%, 20=17.26%, 50=0.06%, 100=0.09% 00:36:27.190 cpu : usr=93.10%, sys=6.64%, ctx=22, majf=0, minf=199 00:36:27.190 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.190 issued rwts: total=3483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.190 00:36:27.190 Run status group 0 (all jobs): 00:36:27.190 READ: bw=109MiB/s (115MB/s), 24.0MiB/s-43.3MiB/s (25.1MB/s-45.5MB/s), io=1098MiB (1151MB), run=10004-10048msec 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.190 00:36:27.190 real 0m11.326s 00:36:27.190 user 0m42.311s 00:36:27.190 sys 0m2.117s 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:27.190 12:21:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.190 ************************************ 00:36:27.190 END TEST fio_dif_digest 00:36:27.190 ************************************ 00:36:27.190 12:21:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:27.190 12:21:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.190 rmmod nvme_tcp 00:36:27.190 rmmod nvme_fabrics 00:36:27.190 rmmod nvme_keyring 00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
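Teardown is symmetric with setup: delete the subsystem and its backing bdev over RPC (traced just above), then nvmftestfini flushes I/O, unloads the kernel initiator stack — the rmmod lines show nvme_tcp taking nvme_fabrics and nvme_keyring with it — and kills the target by PID (the killprocess steps continue just below). A sketch of that tail end, with nvmfpid standing for the nvmf_tgt PID recorded at startup (1272969 in this run):

```bash
# Sketch of the nvmftestfini tail seen in the trace.
sync
modprobe -v -r nvme-tcp       # drags nvme_fabrics and nvme_keyring out too
modprobe -v -r nvme-fabrics   # no-op if the line above already removed it
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null
fi
```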
00:36:27.190 12:21:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:27.191 12:21:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:27.191 12:21:02 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1272969 ']' 00:36:27.191 12:21:02 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1272969 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1272969 ']' 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1272969 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272969 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272969' 00:36:27.191 killing process with pid 1272969 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1272969 00:36:27.191 12:21:02 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1272969 00:36:27.191 12:21:02 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:27.191 12:21:02 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.740 Waiting for block devices as requested 00:36:29.740 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:29.740 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:29.740 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:29.740 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:29.740 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:30.000 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:30.000 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:30.000 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:30.261 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:30.261 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:30.521 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:30.521 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:30.521 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:30.781 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:30.781 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:30.781 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:31.042 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.302 12:21:07 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.302 12:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.302 12:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.213 12:21:09 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.213 00:36:33.213 real 1m18.827s 00:36:33.213 user 8m4.737s 
00:36:33.213 sys 0m22.297s 00:36:33.213 12:21:09 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.213 12:21:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:33.213 ************************************ 00:36:33.213 END TEST nvmf_dif 00:36:33.213 ************************************ 00:36:33.474 12:21:09 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.475 12:21:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:33.475 12:21:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.475 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:36:33.475 ************************************ 00:36:33.475 START TEST nvmf_abort_qd_sizes 00:36:33.475 ************************************ 00:36:33.475 12:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.475 * Looking for test storage... 00:36:33.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.475 12:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:33.475 12:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:33.475 12:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:33.475 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.737 --rc genhtml_branch_coverage=1 00:36:33.737 --rc genhtml_function_coverage=1 00:36:33.737 --rc genhtml_legend=1 00:36:33.737 --rc geninfo_all_blocks=1 00:36:33.737 --rc geninfo_unexecuted_blocks=1 00:36:33.737 00:36:33.737 ' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.737 --rc genhtml_branch_coverage=1 00:36:33.737 --rc genhtml_function_coverage=1 00:36:33.737 --rc genhtml_legend=1 00:36:33.737 --rc geninfo_all_blocks=1 00:36:33.737 --rc geninfo_unexecuted_blocks=1 00:36:33.737 00:36:33.737 ' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.737 --rc genhtml_branch_coverage=1 00:36:33.737 --rc genhtml_function_coverage=1 00:36:33.737 --rc genhtml_legend=1 00:36:33.737 --rc geninfo_all_blocks=1 00:36:33.737 --rc geninfo_unexecuted_blocks=1 00:36:33.737 00:36:33.737 ' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.737 --rc genhtml_branch_coverage=1 00:36:33.737 --rc genhtml_function_coverage=1 00:36:33.737 --rc genhtml_legend=1 00:36:33.737 --rc geninfo_all_blocks=1 00:36:33.737 --rc geninfo_unexecuted_blocks=1 00:36:33.737 00:36:33.737 ' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.737 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:33.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:33.738 12:21:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.875 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:41.875 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:41.875 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:41.875 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:41.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:41.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:41.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.876 12:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:41.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:41.876 12:21:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:41.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:41.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms
00:36:41.876 
00:36:41.876 --- 10.0.0.2 ping statistics ---
00:36:41.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:41.876 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:41.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:41.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:36:41.876 
00:36:41.876 --- 10.0.0.1 ping statistics ---
00:36:41.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:41.876 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']'
00:36:41.876 12:21:17 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:44.420 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:36:44.420 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1293266
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1293266
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1293266 ']'
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
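Condensed, the topology work in the trace above reduces to the following recipe (a sketch assembled from this run's own commands; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and the namespace name are simply the values this log uses, and nvmf/common.sh wraps these steps in its nvmf_tcp_init and nvmfappstart helpers):

    # Isolate the target-side port in its own network namespace;
    # the initiator-side port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ACCEPT rule carries an SPDK_NVMF comment tag so teardown can
    # later strip exactly these rules and nothing else.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
    # The SPDK target itself then runs entirely inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf

The banner that follows is waitforlisten polling for that freshly launched target's RPC socket.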
00:36:44.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:44.992 12:21:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.992 [2024-10-21 12:21:21.481188] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:36:44.992 [2024-10-21 12:21:21.481249] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.992 [2024-10-21 12:21:21.569743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:45.253 [2024-10-21 12:21:21.625159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:45.253 [2024-10-21 12:21:21.625211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:45.253 [2024-10-21 12:21:21.625220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:45.253 [2024-10-21 12:21:21.625228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:45.253 [2024-10-21 12:21:21.625234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:45.253 [2024-10-21 12:21:21.627670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:45.253 [2024-10-21 12:21:21.627831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:45.253 [2024-10-21 12:21:21.627962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.253 [2024-10-21 12:21:21.627962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:45.825 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:45.826 
12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.826 12:21:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:45.826 ************************************ 00:36:45.826 START TEST spdk_target_abort 00:36:45.826 ************************************ 00:36:45.826 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:45.826 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:45.826 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:45.826 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.826 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.398 spdk_targetn1 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.398 [2024-10-21 12:21:22.708818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.398 [2024-10-21 12:21:22.744632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.398 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:46.399 12:21:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.399 [2024-10-21 12:21:22.913011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:80 len:8 PRP1 0x200004ac0000 PRP2 0x0
00:36:46.399 [2024-10-21 12:21:22.913065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:36:46.399 [2024-10-21 12:21:22.927984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:456 len:8 PRP1 0x200004ac0000 PRP2 0x0
00:36:46.399 [2024-10-21 12:21:22.928016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003c p:1 m:0 dnr:0
00:36:46.399 [2024-10-21 12:21:22.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:896 len:8 PRP1 0x200004ac0000 PRP2 0x0
00:36:46.399 [2024-10-21 12:21:22.943888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0074 p:1 m:0 dnr:0
00:36:46.659 [2024-10-21 12:21:22.998975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2552 len:8 PRP1 0x200004ac6000 PRP2 0x0
00:36:46.659 [2024-10-21 12:21:22.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:46.659 [2024-10-21 12:21:22.999054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2544 len:8 PRP1 0x200004ac0000 PRP2 0x0
00:36:46.659 [2024-10-21 12:21:22.999064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:46.659 [2024-10-21 12:21:23.006902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2792 len:8 PRP1 0x200004ac0000 PRP2 0x0
00:36:46.659 [2024-10-21 12:21:23.006930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:46.659 [2024-10-21 12:21:23.045895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3960 len:8 PRP1 0x200004abe000 PRP2 0x0
00:36:46.659 [2024-10-21 12:21:23.045926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0
00:36:49.961 Initializing NVMe Controllers
00:36:49.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:49.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:49.961 Initialization complete. Launching workers.
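The READ/ABORTED pairs above are the abort example logging each in-flight command it targeted and the ABORTED - BY REQUEST completion that came back for it. For orientation, the queue-depth sweep driving these blocks condenses to the loop below (binary path and connection string exactly as used in this run):

    for qd in 4 24 64; do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Each iteration ends with the NS/CTRLR/success summary lines that follow.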
00:36:49.961 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10979, failed: 7
00:36:49.961 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2179, failed to submit 8807
00:36:49.961 success 768, unsuccessful 1411, failed 0
00:36:49.961 12:21:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:49.961 12:21:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:49.961 [2024-10-21 12:21:26.064665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0
00:36:49.961 [2024-10-21 12:21:26.064704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0
00:36:49.961 [2024-10-21 12:21:26.138465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2248 len:8 PRP1 0x200004e40000 PRP2 0x0
00:36:49.961 [2024-10-21 12:21:26.138494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:49.961 [2024-10-21 12:21:26.149440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2408 len:8 PRP1 0x200004e48000 PRP2 0x0
00:36:49.961 [2024-10-21 12:21:26.149463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:36:49.961 [2024-10-21 12:21:26.173295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2976 len:8 PRP1 0x200004e4a000 PRP2 0x0
00:36:49.961 [2024-10-21 12:21:26.173327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:50.221 [2024-10-21 12:21:26.744274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:15872 len:8 PRP1 0x200004e5a000 PRP2 0x0
00:36:50.221 [2024-10-21 12:21:26.744304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00c1 p:0 m:0 dnr:0
00:36:52.763 Initializing NVMe Controllers
00:36:52.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:52.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:52.764 Initialization complete. Launching workers.
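As a consistency check on the q=4 summary above: the 2179 aborts that were submitted split into 768 successful plus 1411 unsuccessful (768 + 1411 = 2179), and submitted plus failed-to-submit aborts account for every I/O the run issued, since 2179 + 8807 = 10986 = 10979 completed + 7 failed. The q=24 results that follow reconcile the same way (334 + 885 = 1219, and 1219 + 7420 = 8639 = 8634 + 5).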
00:36:52.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8634, failed: 5
00:36:52.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7420
00:36:52.764 success 334, unsuccessful 885, failed 0
00:36:52.764 12:21:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:52.764 12:21:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:55.307 [2024-10-21 12:21:31.829199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:142 nsid:1 lba:275160 len:8 PRP1 0x200004adc000 PRP2 0x0
00:36:55.307 [2024-10-21 12:21:31.829246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:142 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:56.248 Initializing NVMe Controllers
00:36:56.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:56.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:56.248 Initialization complete. Launching workers.
00:36:56.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43766, failed: 1
00:36:56.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2762, failed to submit 41005
00:36:56.248 success 603, unsuccessful 2159, failed 0
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:56.248 12:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:58.227 12:21:34 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1293266' 00:36:58.227 killing process with pid 1293266 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1293266 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1293266 00:36:58.227 00:36:58.227 real 0m12.126s 00:36:58.227 user 0m49.415s 00:36:58.227 sys 0m2.009s 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.227 ************************************ 00:36:58.227 END TEST spdk_target_abort 00:36:58.227 ************************************ 00:36:58.227 12:21:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:58.227 12:21:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:58.227 12:21:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:58.227 12:21:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:58.227 ************************************ 00:36:58.227 START TEST kernel_target_abort 00:36:58.227 ************************************ 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:58.227 12:21:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:58.227 12:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:01.532 Waiting for block devices as requested 00:37:01.532 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:01.532 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:01.793 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:01.793 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:01.793 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:02.055 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:02.055 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:02.055 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:02.317 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:02.317 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:02.576 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:02.576 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:02.576 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:02.837 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:02.837 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:02.837 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:03.098 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:03.360 No valid GPT data, bailing 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:37:03.360 
00:37:03.360 Discovery Log Number of Records 2, Generation counter 2
00:37:03.360 =====Discovery Log Entry 0======
00:37:03.360 trtype: tcp
00:37:03.360 adrfam: ipv4
00:37:03.360 subtype: current discovery subsystem
00:37:03.360 treq: not specified, sq flow control disable supported
00:37:03.360 portid: 1
00:37:03.360 trsvcid: 4420
00:37:03.360 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:37:03.360 traddr: 10.0.0.1
00:37:03.360 eflags: none
00:37:03.360 sectype: none
00:37:03.360 =====Discovery Log Entry 1======
00:37:03.360 trtype: tcp
00:37:03.360 adrfam: ipv4
00:37:03.360 subtype: nvme subsystem
00:37:03.360 treq: not specified, sq flow control disable supported
00:37:03.360 portid: 1
00:37:03.360 trsvcid: 4420
00:37:03.360 subnqn: nqn.2016-06.io.spdk:testnqn
00:37:03.360 traddr: 10.0.0.1
00:37:03.360 eflags: none
00:37:03.360 sectype: none
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:37:03.360 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:37:03.621 
12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:03.621 12:21:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:06.924 Initializing NVMe Controllers 00:37:06.924 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:06.924 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:06.924 Initialization complete. Launching workers. 00:37:06.924 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67392, failed: 0 00:37:06.924 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67392, failed to submit 0 00:37:06.924 success 0, unsuccessful 67392, failed 0 00:37:06.924 12:21:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:06.924 12:21:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.226 Initializing NVMe Controllers 00:37:10.226 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.226 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.226 Initialization complete. Launching workers. 
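While the q=24 pass launches, it is worth restating how this half of the test differs from the first: the target here is the Linux kernel nvmet target, assembled through configfs in the trace further up. Condensed below as a sketch; the xtrace only shows the echoed values, not the redirect destinations, so the attribute names are filled in from the standard nvmet configfs layout and should be treated as an assumption rather than something read from this log (nvmet_tcp is pulled in once the tcp port is configured, which the later modprobe -r nvmet_tcp nvmet in the teardown confirms):

    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial   # assumed attribute
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    # Expose the subsystem on the port:
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

Note the "success 0" in the q=4 summary above: against the kernel target none of the submitted aborts succeeded, which is consistent with the target completing the racing I/O before each abort could take effect; the counters are informational, and the q=24 and q=64 summaries that follow show the same pattern.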
00:37:10.226 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117684, failed: 0
00:37:10.226 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29658, failed to submit 88026
00:37:10.226 success 0, unsuccessful 29658, failed 0
00:37:10.226 12:21:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:37:10.226 12:21:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:37:12.772 Initializing NVMe Controllers
00:37:12.772 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:37:12.772 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:37:12.772 Initialization complete. Launching workers.
00:37:12.772 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145771, failed: 0
00:37:12.772 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36486, failed to submit 109285
00:37:12.772 success 0, unsuccessful 36486, failed 0
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*)
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet
00:37:12.772 12:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:37:16.980 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:37:16.980 0000:00:01.0 (8086 0b00): ioatdma
-> vfio-pci 00:37:16.980 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:18.368 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:18.630 00:37:18.630 real 0m20.494s 00:37:18.630 user 0m9.868s 00:37:18.630 sys 0m6.252s 00:37:18.630 12:21:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:18.630 12:21:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:18.630 ************************************ 00:37:18.630 END TEST kernel_target_abort 00:37:18.630 ************************************ 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.630 rmmod nvme_tcp 00:37:18.630 rmmod nvme_fabrics 00:37:18.630 rmmod nvme_keyring 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1293266 ']' 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1293266 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1293266 ']' 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1293266 00:37:18.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1293266) - No such process 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1293266 is not found' 00:37:18.630 Process with pid 1293266 is not found 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:18.630 12:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:22.841 Waiting for block devices as requested 00:37:22.842 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:22.842 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:23.102 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:23.102 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:23.102 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:23.364 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:23.364 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:23.364 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:23.625 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:23.625 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:23.886 12:22:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.433 12:22:02 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:26.433 00:37:26.433 real 0m52.612s 00:37:26.433 user 1m4.771s 00:37:26.433 sys 0m19.371s 00:37:26.433 12:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:26.433 12:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:26.433 ************************************ 00:37:26.433 END TEST nvmf_abort_qd_sizes 00:37:26.433 ************************************ 00:37:26.433 12:22:02 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:26.433 12:22:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:26.433 12:22:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:26.433 12:22:02 -- common/autotest_common.sh@10 -- # set +x 00:37:26.433 ************************************ 00:37:26.433 START TEST keyring_file 00:37:26.433 ************************************ 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:26.433 * Looking for test storage... 
00:37:26.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:26.433 12:22:02 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:26.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.433 --rc genhtml_branch_coverage=1 00:37:26.433 --rc genhtml_function_coverage=1 00:37:26.433 --rc genhtml_legend=1 00:37:26.433 --rc geninfo_all_blocks=1 00:37:26.433 --rc geninfo_unexecuted_blocks=1 00:37:26.433 00:37:26.433 ' 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:26.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.433 --rc genhtml_branch_coverage=1 00:37:26.433 --rc genhtml_function_coverage=1 00:37:26.433 --rc genhtml_legend=1 00:37:26.433 --rc geninfo_all_blocks=1 
00:37:26.433 --rc geninfo_unexecuted_blocks=1 00:37:26.433 00:37:26.433 ' 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:26.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.433 --rc genhtml_branch_coverage=1 00:37:26.433 --rc genhtml_function_coverage=1 00:37:26.433 --rc genhtml_legend=1 00:37:26.433 --rc geninfo_all_blocks=1 00:37:26.433 --rc geninfo_unexecuted_blocks=1 00:37:26.433 00:37:26.433 ' 00:37:26.433 12:22:02 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:26.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.433 --rc genhtml_branch_coverage=1 00:37:26.433 --rc genhtml_function_coverage=1 00:37:26.433 --rc genhtml_legend=1 00:37:26.433 --rc geninfo_all_blocks=1 00:37:26.433 --rc geninfo_unexecuted_blocks=1 00:37:26.433 00:37:26.433 ' 00:37:26.433 12:22:02 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:26.433 12:22:02 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.433 12:22:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.434 12:22:02 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:26.434 12:22:02 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.434 12:22:02 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.434 12:22:02 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.434 12:22:02 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.434 12:22:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.434 12:22:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.434 12:22:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:26.434 12:22:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:26.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
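prep_key, entered here, turns each raw hex key into a TLS PSK file that the keyring tests can load. Condensed as a sketch (the paths are this run's mktemp results; the interchange encoding itself is produced by an inline Python helper in nvmf/common.sh and the output redirect is not visible in the xtrace, so both are inferred; per the NVMe TLS PSK interchange convention the file should end up holding a single NVMeTLSkey-1:00:...: line for digest 0):

    path=$(mktemp)                                                        # /tmp/tmp.Qast2hb26Q for key0 in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # digest 0: plain PSK, no hash
    chmod 0600 "$path"                                                    # keep the key file owner-only

Further down, once bdevperf is listening on /var/tmp/bperf.sock, the two key files are registered with the keyring and read back over RPC, roughly:

    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8gr2CXvOim
    # get_key: list registered keys, pick out key0, then read back its path
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .path

The records that follow show prep_key and then the bperf RPCs doing exactly this for key0 and key1.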
00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qast2hb26Q 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qast2hb26Q 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qast2hb26Q 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Qast2hb26Q 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8gr2CXvOim 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:26.434 12:22:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8gr2CXvOim 00:37:26.434 12:22:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8gr2CXvOim 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8gr2CXvOim 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=1303715 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1303715 00:37:26.434 12:22:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1303715 ']' 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:26.434 12:22:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:26.434 [2024-10-21 12:22:02.986724] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:37:26.434 [2024-10-21 12:22:02.986800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303715 ] 00:37:26.695 [2024-10-21 12:22:03.069890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.695 [2024-10-21 12:22:03.122892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.267 12:22:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:27.267 12:22:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:27.267 12:22:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:27.267 12:22:03 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.267 12:22:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:27.267 [2024-10-21 12:22:03.817533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.267 null0 00:37:27.267 [2024-10-21 12:22:03.849579] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:27.267 [2024-10-21 12:22:03.849982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.527 12:22:03 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:27.527 [2024-10-21 12:22:03.881638] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:27.527 request: 00:37:27.527 { 00:37:27.527 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.527 "secure_channel": false, 00:37:27.527 "listen_address": { 00:37:27.527 "trtype": "tcp", 00:37:27.527 "traddr": "127.0.0.1", 00:37:27.527 "trsvcid": "4420" 00:37:27.527 }, 00:37:27.527 "method": "nvmf_subsystem_add_listener", 00:37:27.527 "req_id": 1 00:37:27.527 } 00:37:27.527 Got JSON-RPC error response 00:37:27.527 response: 00:37:27.527 { 00:37:27.527 
"code": -32602, 00:37:27.527 "message": "Invalid parameters" 00:37:27.527 } 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:27.527 12:22:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:27.528 12:22:03 keyring_file -- keyring/file.sh@47 -- # bperfpid=1303805 00:37:27.528 12:22:03 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1303805 /var/tmp/bperf.sock 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1303805 ']' 00:37:27.528 12:22:03 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:27.528 12:22:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:27.528 [2024-10-21 12:22:03.943501] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:37:27.528 [2024-10-21 12:22:03.943568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303805 ] 00:37:27.528 [2024-10-21 12:22:04.024187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.528 [2024-10-21 12:22:04.077034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.470 12:22:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:28.470 12:22:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:28.470 12:22:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:28.470 12:22:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:28.470 12:22:04 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8gr2CXvOim 00:37:28.470 12:22:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8gr2CXvOim 00:37:28.731 12:22:05 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:28.731 12:22:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:28.731 12:22:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.731 12:22:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.731 12:22:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:28.991 12:22:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Qast2hb26Q == \/\t\m\p\/\t\m\p\.\Q\a\s\t\2\h\b\2\6\Q ]] 00:37:28.991 12:22:05 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:28.991 12:22:05 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.991 12:22:05 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.8gr2CXvOim == \/\t\m\p\/\t\m\p\.\8\g\r\2\C\X\v\O\i\m ]] 00:37:28.991 12:22:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.991 12:22:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.252 12:22:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:29.252 12:22:05 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:29.252 12:22:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.252 12:22:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:29.252 12:22:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.252 12:22:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.252 12:22:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:29.512 12:22:05 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:29.513 12:22:05 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.513 12:22:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.513 [2024-10-21 12:22:06.105741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:29.773 nvme0n1 00:37:29.773 12:22:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:29.773 12:22:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:29.773 12:22:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.773 12:22:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.773 12:22:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.773 12:22:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.034 12:22:06 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:30.034 12:22:06 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:30.034 12:22:06 keyring_file 
-- keyring/common.sh@12 -- # get_key key1
00:37:30.034 12:22:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:30.034 12:22:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:30.034 12:22:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:30.034 12:22:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:30.034 12:22:06 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:37:30.034 12:22:06 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:30.295 Running I/O for 1 seconds...
00:37:31.237 18406.00 IOPS, 71.90 MiB/s
00:37:31.237 Latency(us)
00:37:31.237 [2024-10-21T10:22:07.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:31.237 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:31.237 nvme0n1 : 1.00 18465.08 72.13 0.00 0.00 6918.79 2307.41 14636.37
00:37:31.237 [2024-10-21T10:22:07.832Z] ===================================================================================================================
00:37:31.237 [2024-10-21T10:22:07.832Z] Total : 18465.08 72.13 0.00 0.00 6918.79 2307.41 14636.37
00:37:31.237 {
00:37:31.237 "results": [
00:37:31.237 {
00:37:31.237 "job": "nvme0n1",
00:37:31.237 "core_mask": "0x2",
00:37:31.237 "workload": "randrw",
00:37:31.237 "percentage": 50,
00:37:31.237 "status": "finished",
00:37:31.237 "queue_depth": 128,
00:37:31.237 "io_size": 4096,
00:37:31.237 "runtime": 1.003841,
00:37:31.237 "iops": 18465.07564444967,
00:37:31.237 "mibps": 72.12920173613152,
00:37:31.237 "io_failed": 0,
00:37:31.237 "io_timeout": 0,
00:37:31.237 "avg_latency_us": 6918.7883642641345,
00:37:31.237 "min_latency_us": 2307.4133333333334,
00:37:31.237 "max_latency_us": 14636.373333333333
00:37:31.237 }
00:37:31.237 ],
00:37:31.237 "core_count": 1
00:37:31.237 }
00:37:31.237 12:22:07 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:31.237 12:22:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:31.497 12:22:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:37:31.497 12:22:07 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:31.497 12:22:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:31.497 12:22:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:31.497 12:22:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:31.497 12:22:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:31.498 12:22:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:37:31.498 12:22:08 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:37:31.498 12:22:08 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:31.498 12:22:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:31.498 12:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:31.498 12:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:31.498 12:22:08 keyring_file -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.759 12:22:08 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:31.759 12:22:08 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:31.759 12:22:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:31.759 12:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:32.020 [2024-10-21 12:22:08.412925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:32.020 [2024-10-21 12:22:08.413485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb5080 (107): Transport endpoint is not connected 00:37:32.020 [2024-10-21 12:22:08.414480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb5080 (9): Bad file descriptor 00:37:32.020 [2024-10-21 12:22:08.415482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:32.020 [2024-10-21 12:22:08.415489] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:32.020 [2024-10-21 12:22:08.415494] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:32.020 [2024-10-21 12:22:08.415500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
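This failed attach is a deliberate negative check: the controller is attached with key1, which does not match the key the listener expects, so bdev_nvme_attach_controller must fail, and the NOT wrapper turns that failure into a passing assertion (the JSON-RPC request/response for the rejected attach is reproduced just below). A simplified sketch of the wrapper logic visible in the xtrace, condensed from what the traced autotest_common.sh lines show rather than the full helper:

    # Succeed only when the wrapped command fails "normally": exit codes
    # above 128 (process killed by a signal) still count as real failures.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # signal-style exit: propagate as failure
        (( es != 0 ))                   # status 0 iff the command errored
    }
    # usage, as in this test: NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1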
00:37:32.020 request: {
00:37:32.020 "name": "nvme0",
00:37:32.020 "trtype": "tcp",
00:37:32.020 "traddr": "127.0.0.1",
00:37:32.020 "adrfam": "ipv4",
00:37:32.020 "trsvcid": "4420",
00:37:32.020 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:32.020 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:32.020 "prchk_reftag": false,
00:37:32.020 "prchk_guard": false,
00:37:32.020 "hdgst": false,
00:37:32.020 "ddgst": false,
00:37:32.020 "psk": "key1",
00:37:32.020 "allow_unrecognized_csi": false,
00:37:32.020 "method": "bdev_nvme_attach_controller",
00:37:32.020 "req_id": 1
00:37:32.020 }
00:37:32.020 Got JSON-RPC error response
00:37:32.020 response:
00:37:32.020 {
00:37:32.020 "code": -5,
00:37:32.020 "message": "Input/output error"
00:37:32.020 }
00:37:32.020 12:22:08 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:37:32.020 12:22:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:37:32.020 12:22:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:37:32.020 12:22:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:37:32.020 12:22:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:32.020 12:22:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:37:32.020 12:22:08 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:32.020 12:22:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:32.280 12:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:32.280 12:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:32.280 12:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:32.280 12:22:08 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:37:32.280 12:22:08 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:37:32.280 12:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:37:32.540 12:22:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:37:32.540 12:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:37:32.540 12:22:09 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:37:32.540 12:22:09 keyring_file -- keyring/file.sh@78 -- # jq length
00:37:32.540 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:32.800 12:22:09 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:37:32.800 12:22:09 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Qast2hb26Q
00:37:32.800 12:22:09 keyring_file -- keyring/file.sh@82 -- #
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:32.800 12:22:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:32.800 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:33.061 [2024-10-21 12:22:09.445157] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qast2hb26Q': 0100660 00:37:33.061 [2024-10-21 12:22:09.445175] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:33.061 request: 00:37:33.061 { 00:37:33.061 "name": "key0", 00:37:33.061 "path": "/tmp/tmp.Qast2hb26Q", 00:37:33.061 "method": "keyring_file_add_key", 00:37:33.061 "req_id": 1 00:37:33.061 } 00:37:33.061 Got JSON-RPC error response 00:37:33.061 response: 00:37:33.061 { 00:37:33.061 "code": -1, 00:37:33.061 "message": "Operation not permitted" 00:37:33.061 } 00:37:33.061 12:22:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:33.061 12:22:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:33.061 12:22:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:33.061 12:22:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:33.061 12:22:09 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Qast2hb26Q 00:37:33.061 12:22:09 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qast2hb26Q 00:37:33.061 12:22:09 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Qast2hb26Q 00:37:33.061 12:22:09 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.061 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.322 12:22:09 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:33.322 12:22:09 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:33.322 12:22:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.322 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.583 [2024-10-21 12:22:09.958465] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Qast2hb26Q': No such file or directory 00:37:33.583 [2024-10-21 12:22:09.958478] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:33.583 [2024-10-21 12:22:09.958490] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:33.583 [2024-10-21 12:22:09.958496] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:33.583 [2024-10-21 12:22:09.958501] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:33.583 [2024-10-21 12:22:09.958506] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:33.583 request: 00:37:33.583 { 00:37:33.583 "name": "nvme0", 00:37:33.583 "trtype": "tcp", 00:37:33.583 "traddr": "127.0.0.1", 00:37:33.583 "adrfam": "ipv4", 00:37:33.583 "trsvcid": "4420", 00:37:33.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:33.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:33.583 "prchk_reftag": false, 00:37:33.583 "prchk_guard": false, 00:37:33.583 "hdgst": false, 00:37:33.583 "ddgst": false, 00:37:33.583 "psk": "key0", 00:37:33.583 "allow_unrecognized_csi": false, 00:37:33.583 "method": "bdev_nvme_attach_controller", 00:37:33.583 "req_id": 1 00:37:33.583 } 00:37:33.583 Got JSON-RPC error response 00:37:33.583 response: 00:37:33.583 { 00:37:33.583 "code": -19, 00:37:33.583 "message": "No such device" 00:37:33.583 } 00:37:33.583 12:22:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:33.583 12:22:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:33.583 12:22:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:33.583 12:22:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:33.583 12:22:09 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:33.584 12:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:33.584 12:22:10 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wptuVPxRLx 00:37:33.584 12:22:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:33.584 12:22:10 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:33.845 12:22:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wptuVPxRLx 00:37:33.845 12:22:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wptuVPxRLx 00:37:33.845 12:22:10 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.wptuVPxRLx 00:37:33.845 12:22:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wptuVPxRLx 00:37:33.845 12:22:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wptuVPxRLx 00:37:33.845 12:22:10 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.845 12:22:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:34.105 nvme0n1 00:37:34.105 12:22:10 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:34.105 12:22:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.105 12:22:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.105 12:22:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.105 12:22:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.105 12:22:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.364 12:22:10 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:34.364 12:22:10 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:34.364 12:22:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:34.625 12:22:10 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:34.625 12:22:10 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:34.625 12:22:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.625 12:22:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.625 12:22:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.625 12:22:11 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:34.625 12:22:11 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:34.625 12:22:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.625 12:22:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.625 12:22:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.625 12:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.625 12:22:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.886 12:22:11 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:34.886 12:22:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:34.886 12:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:35.146 12:22:11 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:35.146 12:22:11 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:35.146 12:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.146 12:22:11 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:35.146 12:22:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wptuVPxRLx 00:37:35.146 12:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wptuVPxRLx 00:37:35.408 12:22:11 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8gr2CXvOim 00:37:35.408 12:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8gr2CXvOim 00:37:35.670 12:22:12 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.670 12:22:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.931 nvme0n1 00:37:35.931 12:22:12 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:35.931 12:22:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:36.193 12:22:12 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:36.193 "subsystems": [ 00:37:36.193 { 00:37:36.193 "subsystem": "keyring", 00:37:36.193 "config": [ 00:37:36.193 { 00:37:36.193 "method": "keyring_file_add_key", 00:37:36.193 "params": { 00:37:36.193 "name": "key0", 00:37:36.193 "path": "/tmp/tmp.wptuVPxRLx" 00:37:36.193 } 00:37:36.193 }, 00:37:36.193 { 00:37:36.193 "method": "keyring_file_add_key", 00:37:36.193 "params": { 00:37:36.193 "name": "key1", 00:37:36.193 "path": "/tmp/tmp.8gr2CXvOim" 00:37:36.193 } 00:37:36.193 } 00:37:36.193 ] 00:37:36.193 
}, 00:37:36.193 { 00:37:36.193 "subsystem": "iobuf", 00:37:36.193 "config": [ 00:37:36.193 { 00:37:36.193 "method": "iobuf_set_options", 00:37:36.193 "params": { 00:37:36.193 "small_pool_count": 8192, 00:37:36.193 "large_pool_count": 1024, 00:37:36.193 "small_bufsize": 8192, 00:37:36.193 "large_bufsize": 135168 00:37:36.193 } 00:37:36.193 } 00:37:36.193 ] 00:37:36.193 }, 00:37:36.193 { 00:37:36.193 "subsystem": "sock", 00:37:36.193 "config": [ 00:37:36.193 { 00:37:36.193 "method": "sock_set_default_impl", 00:37:36.193 "params": { 00:37:36.193 "impl_name": "posix" 00:37:36.193 } 00:37:36.193 }, 00:37:36.193 { 00:37:36.193 "method": "sock_impl_set_options", 00:37:36.194 "params": { 00:37:36.194 "impl_name": "ssl", 00:37:36.194 "recv_buf_size": 4096, 00:37:36.194 "send_buf_size": 4096, 00:37:36.194 "enable_recv_pipe": true, 00:37:36.194 "enable_quickack": false, 00:37:36.194 "enable_placement_id": 0, 00:37:36.194 "enable_zerocopy_send_server": true, 00:37:36.194 "enable_zerocopy_send_client": false, 00:37:36.194 "zerocopy_threshold": 0, 00:37:36.194 "tls_version": 0, 00:37:36.194 "enable_ktls": false 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "sock_impl_set_options", 00:37:36.194 "params": { 00:37:36.194 "impl_name": "posix", 00:37:36.194 "recv_buf_size": 2097152, 00:37:36.194 "send_buf_size": 2097152, 00:37:36.194 "enable_recv_pipe": true, 00:37:36.194 "enable_quickack": false, 00:37:36.194 "enable_placement_id": 0, 00:37:36.194 "enable_zerocopy_send_server": true, 00:37:36.194 "enable_zerocopy_send_client": false, 00:37:36.194 "zerocopy_threshold": 0, 00:37:36.194 "tls_version": 0, 00:37:36.194 "enable_ktls": false 00:37:36.194 } 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "vmd", 00:37:36.194 "config": [] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "accel", 00:37:36.194 "config": [ 00:37:36.194 { 00:37:36.194 "method": "accel_set_options", 00:37:36.194 "params": { 00:37:36.194 "small_cache_size": 128, 00:37:36.194 "large_cache_size": 16, 00:37:36.194 "task_count": 2048, 00:37:36.194 "sequence_count": 2048, 00:37:36.194 "buf_count": 2048 00:37:36.194 } 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "bdev", 00:37:36.194 "config": [ 00:37:36.194 { 00:37:36.194 "method": "bdev_set_options", 00:37:36.194 "params": { 00:37:36.194 "bdev_io_pool_size": 65535, 00:37:36.194 "bdev_io_cache_size": 256, 00:37:36.194 "bdev_auto_examine": true, 00:37:36.194 "iobuf_small_cache_size": 128, 00:37:36.194 "iobuf_large_cache_size": 16 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_raid_set_options", 00:37:36.194 "params": { 00:37:36.194 "process_window_size_kb": 1024, 00:37:36.194 "process_max_bandwidth_mb_sec": 0 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_iscsi_set_options", 00:37:36.194 "params": { 00:37:36.194 "timeout_sec": 30 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_nvme_set_options", 00:37:36.194 "params": { 00:37:36.194 "action_on_timeout": "none", 00:37:36.194 "timeout_us": 0, 00:37:36.194 "timeout_admin_us": 0, 00:37:36.194 "keep_alive_timeout_ms": 10000, 00:37:36.194 "arbitration_burst": 0, 00:37:36.194 "low_priority_weight": 0, 00:37:36.194 "medium_priority_weight": 0, 00:37:36.194 "high_priority_weight": 0, 00:37:36.194 "nvme_adminq_poll_period_us": 10000, 00:37:36.194 "nvme_ioq_poll_period_us": 0, 00:37:36.194 "io_queue_requests": 512, 00:37:36.194 "delay_cmd_submit": true, 00:37:36.194 
"transport_retry_count": 4, 00:37:36.194 "bdev_retry_count": 3, 00:37:36.194 "transport_ack_timeout": 0, 00:37:36.194 "ctrlr_loss_timeout_sec": 0, 00:37:36.194 "reconnect_delay_sec": 0, 00:37:36.194 "fast_io_fail_timeout_sec": 0, 00:37:36.194 "disable_auto_failback": false, 00:37:36.194 "generate_uuids": false, 00:37:36.194 "transport_tos": 0, 00:37:36.194 "nvme_error_stat": false, 00:37:36.194 "rdma_srq_size": 0, 00:37:36.194 "io_path_stat": false, 00:37:36.194 "allow_accel_sequence": false, 00:37:36.194 "rdma_max_cq_size": 0, 00:37:36.194 "rdma_cm_event_timeout_ms": 0, 00:37:36.194 "dhchap_digests": [ 00:37:36.194 "sha256", 00:37:36.194 "sha384", 00:37:36.194 "sha512" 00:37:36.194 ], 00:37:36.194 "dhchap_dhgroups": [ 00:37:36.194 "null", 00:37:36.194 "ffdhe2048", 00:37:36.194 "ffdhe3072", 00:37:36.194 "ffdhe4096", 00:37:36.194 "ffdhe6144", 00:37:36.194 "ffdhe8192" 00:37:36.194 ] 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_nvme_attach_controller", 00:37:36.194 "params": { 00:37:36.194 "name": "nvme0", 00:37:36.194 "trtype": "TCP", 00:37:36.194 "adrfam": "IPv4", 00:37:36.194 "traddr": "127.0.0.1", 00:37:36.194 "trsvcid": "4420", 00:37:36.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.194 "prchk_reftag": false, 00:37:36.194 "prchk_guard": false, 00:37:36.194 "ctrlr_loss_timeout_sec": 0, 00:37:36.194 "reconnect_delay_sec": 0, 00:37:36.194 "fast_io_fail_timeout_sec": 0, 00:37:36.194 "psk": "key0", 00:37:36.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.194 "hdgst": false, 00:37:36.194 "ddgst": false, 00:37:36.194 "multipath": "multipath" 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_nvme_set_hotplug", 00:37:36.194 "params": { 00:37:36.194 "period_us": 100000, 00:37:36.194 "enable": false 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "bdev_wait_for_examine" 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "nbd", 00:37:36.194 "config": [] 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }' 00:37:36.194 12:22:12 keyring_file -- keyring/file.sh@115 -- # killprocess 1303805 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1303805 ']' 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1303805 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1303805 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1303805' 00:37:36.194 killing process with pid 1303805 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@969 -- # kill 1303805 00:37:36.194 Received shutdown signal, test time was about 1.000000 seconds 00:37:36.194 00:37:36.194 Latency(us) 00:37:36.194 [2024-10-21T10:22:12.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.194 [2024-10-21T10:22:12.789Z] =================================================================================================================== 00:37:36.194 [2024-10-21T10:22:12.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:36.194 12:22:12 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1303805 00:37:36.194 12:22:12 keyring_file -- keyring/file.sh@118 -- # bperfpid=1305619 00:37:36.194 12:22:12 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1305619 /var/tmp/bperf.sock 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1305619 ']' 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:36.194 12:22:12 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:36.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:36.194 12:22:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:36.194 12:22:12 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:36.194 "subsystems": [ 00:37:36.194 { 00:37:36.194 "subsystem": "keyring", 00:37:36.194 "config": [ 00:37:36.194 { 00:37:36.194 "method": "keyring_file_add_key", 00:37:36.194 "params": { 00:37:36.194 "name": "key0", 00:37:36.194 "path": "/tmp/tmp.wptuVPxRLx" 00:37:36.194 } 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "method": "keyring_file_add_key", 00:37:36.194 "params": { 00:37:36.194 "name": "key1", 00:37:36.194 "path": "/tmp/tmp.8gr2CXvOim" 00:37:36.194 } 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "iobuf", 00:37:36.194 "config": [ 00:37:36.194 { 00:37:36.194 "method": "iobuf_set_options", 00:37:36.194 "params": { 00:37:36.194 "small_pool_count": 8192, 00:37:36.194 "large_pool_count": 1024, 00:37:36.194 "small_bufsize": 8192, 00:37:36.194 "large_bufsize": 135168 00:37:36.194 } 00:37:36.194 } 00:37:36.194 ] 00:37:36.194 }, 00:37:36.194 { 00:37:36.194 "subsystem": "sock", 00:37:36.194 "config": [ 00:37:36.194 { 00:37:36.194 "method": "sock_set_default_impl", 00:37:36.195 "params": { 00:37:36.195 "impl_name": "posix" 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "sock_impl_set_options", 00:37:36.195 "params": { 00:37:36.195 "impl_name": "ssl", 00:37:36.195 "recv_buf_size": 4096, 00:37:36.195 "send_buf_size": 4096, 00:37:36.195 "enable_recv_pipe": true, 00:37:36.195 "enable_quickack": false, 00:37:36.195 "enable_placement_id": 0, 00:37:36.195 "enable_zerocopy_send_server": true, 00:37:36.195 "enable_zerocopy_send_client": false, 00:37:36.195 "zerocopy_threshold": 0, 00:37:36.195 "tls_version": 0, 00:37:36.195 "enable_ktls": false 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "sock_impl_set_options", 00:37:36.195 "params": { 00:37:36.195 "impl_name": "posix", 00:37:36.195 "recv_buf_size": 2097152, 00:37:36.195 "send_buf_size": 2097152, 00:37:36.195 "enable_recv_pipe": true, 00:37:36.195 "enable_quickack": false, 00:37:36.195 "enable_placement_id": 0, 00:37:36.195 "enable_zerocopy_send_server": true, 00:37:36.195 "enable_zerocopy_send_client": false, 00:37:36.195 "zerocopy_threshold": 0, 00:37:36.195 "tls_version": 0, 00:37:36.195 "enable_ktls": false 00:37:36.195 } 00:37:36.195 } 00:37:36.195 ] 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "subsystem": "vmd", 00:37:36.195 
"config": [] 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "subsystem": "accel", 00:37:36.195 "config": [ 00:37:36.195 { 00:37:36.195 "method": "accel_set_options", 00:37:36.195 "params": { 00:37:36.195 "small_cache_size": 128, 00:37:36.195 "large_cache_size": 16, 00:37:36.195 "task_count": 2048, 00:37:36.195 "sequence_count": 2048, 00:37:36.195 "buf_count": 2048 00:37:36.195 } 00:37:36.195 } 00:37:36.195 ] 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "subsystem": "bdev", 00:37:36.195 "config": [ 00:37:36.195 { 00:37:36.195 "method": "bdev_set_options", 00:37:36.195 "params": { 00:37:36.195 "bdev_io_pool_size": 65535, 00:37:36.195 "bdev_io_cache_size": 256, 00:37:36.195 "bdev_auto_examine": true, 00:37:36.195 "iobuf_small_cache_size": 128, 00:37:36.195 "iobuf_large_cache_size": 16 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_raid_set_options", 00:37:36.195 "params": { 00:37:36.195 "process_window_size_kb": 1024, 00:37:36.195 "process_max_bandwidth_mb_sec": 0 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_iscsi_set_options", 00:37:36.195 "params": { 00:37:36.195 "timeout_sec": 30 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_nvme_set_options", 00:37:36.195 "params": { 00:37:36.195 "action_on_timeout": "none", 00:37:36.195 "timeout_us": 0, 00:37:36.195 "timeout_admin_us": 0, 00:37:36.195 "keep_alive_timeout_ms": 10000, 00:37:36.195 "arbitration_burst": 0, 00:37:36.195 "low_priority_weight": 0, 00:37:36.195 "medium_priority_weight": 0, 00:37:36.195 "high_priority_weight": 0, 00:37:36.195 "nvme_adminq_poll_period_us": 10000, 00:37:36.195 "nvme_ioq_poll_period_us": 0, 00:37:36.195 "io_queue_requests": 512, 00:37:36.195 "delay_cmd_submit": true, 00:37:36.195 "transport_retry_count": 4, 00:37:36.195 "bdev_retry_count": 3, 00:37:36.195 "transport_ack_timeout": 0, 00:37:36.195 "ctrlr_loss_timeout_sec": 0, 00:37:36.195 "reconnect_delay_sec": 0, 00:37:36.195 "fast_io_fail_timeout_sec": 0, 00:37:36.195 "disable_auto_failback": false, 00:37:36.195 "generate_uuids": false, 00:37:36.195 "transport_tos": 0, 00:37:36.195 "nvme_error_stat": false, 00:37:36.195 "rdma_srq_size": 0, 00:37:36.195 "io_path_stat": false, 00:37:36.195 "allow_accel_sequence": false, 00:37:36.195 "rdma_max_cq_size": 0, 00:37:36.195 "rdma_cm_event_timeout_ms": 0, 00:37:36.195 "dhchap_digests": [ 00:37:36.195 "sha256", 00:37:36.195 "sha384", 00:37:36.195 "sha512" 00:37:36.195 ], 00:37:36.195 "dhchap_dhgroups": [ 00:37:36.195 "null", 00:37:36.195 "ffdhe2048", 00:37:36.195 "ffdhe3072", 00:37:36.195 "ffdhe4096", 00:37:36.195 "ffdhe6144", 00:37:36.195 "ffdhe8192" 00:37:36.195 ] 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_nvme_attach_controller", 00:37:36.195 "params": { 00:37:36.195 "name": "nvme0", 00:37:36.195 "trtype": "TCP", 00:37:36.195 "adrfam": "IPv4", 00:37:36.195 "traddr": "127.0.0.1", 00:37:36.195 "trsvcid": "4420", 00:37:36.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.195 "prchk_reftag": false, 00:37:36.195 "prchk_guard": false, 00:37:36.195 "ctrlr_loss_timeout_sec": 0, 00:37:36.195 "reconnect_delay_sec": 0, 00:37:36.195 "fast_io_fail_timeout_sec": 0, 00:37:36.195 "psk": "key0", 00:37:36.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.195 "hdgst": false, 00:37:36.195 "ddgst": false, 00:37:36.195 "multipath": "multipath" 00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_nvme_set_hotplug", 00:37:36.195 "params": { 00:37:36.195 "period_us": 100000, 00:37:36.195 "enable": false 
00:37:36.195 } 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "method": "bdev_wait_for_examine" 00:37:36.195 } 00:37:36.195 ] 00:37:36.195 }, 00:37:36.195 { 00:37:36.195 "subsystem": "nbd", 00:37:36.195 "config": [] 00:37:36.195 } 00:37:36.195 ] 00:37:36.195 }' 00:37:36.195 [2024-10-21 12:22:12.768424] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:37:36.195 [2024-10-21 12:22:12.768483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305619 ] 00:37:36.456 [2024-10-21 12:22:12.843723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.456 [2024-10-21 12:22:12.872604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.456 [2024-10-21 12:22:13.015129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:37.027 12:22:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:37.027 12:22:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:37.027 12:22:13 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:37.027 12:22:13 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:37.027 12:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.361 12:22:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:37.361 12:22:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.361 12:22:13 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:37.361 12:22:13 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.361 12:22:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:37.682 12:22:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:37.682 12:22:14 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wptuVPxRLx 
/tmp/tmp.8gr2CXvOim
00:37:37.683 12:22:14 keyring_file -- keyring/file.sh@20 -- # killprocess 1305619
00:37:37.683 12:22:14 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1305619 ']'
00:37:37.683 12:22:14 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1305619
00:37:37.683 12:22:14 keyring_file -- common/autotest_common.sh@955 -- # uname
00:37:37.683 12:22:14 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:37.683 12:22:14 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1305619
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1305619'
00:37:37.944 killing process with pid 1305619
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@969 -- # kill 1305619
00:37:37.944 Received shutdown signal, test time was about 1.000000 seconds
00:37:37.944
00:37:37.944 Latency(us)
00:37:37.944 [2024-10-21T10:22:14.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:37.944 [2024-10-21T10:22:14.539Z] ===================================================================================================================
00:37:37.944 [2024-10-21T10:22:14.539Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@974 -- # wait 1305619
00:37:37.944 12:22:14 keyring_file -- keyring/file.sh@21 -- # killprocess 1303715
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1303715 ']'
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1303715
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@955 -- # uname
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1303715
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1303715'
00:37:37.944 killing process with pid 1303715
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@969 -- # kill 1303715
00:37:37.944 12:22:14 keyring_file -- common/autotest_common.sh@974 -- # wait 1303715
00:37:38.205
00:37:38.205 real 0m12.100s
00:37:38.205 user 0m29.189s
00:37:38.205 sys 0m2.701s
00:37:38.205 12:22:14 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable
00:37:38.205 12:22:14 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:38.205 ************************************
00:37:38.205 END TEST keyring_file
00:37:38.205 ************************************
00:37:38.205 12:22:14 -- spdk/autotest.sh@289 -- # [[ y == y ]]
00:37:38.205 12:22:14 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:37:38.205 12:22:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:37:38.205 12:22:14 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:37:38.205 12:22:14 -- common/autotest_common.sh@10 -- # set +x
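run_test launches keyring_linux through keyctl-session-wrapper so that kernel keys created by the test live in a throwaway session keyring; the "Joined session keyring: <id>" line below is that session being established. The wrapper's contents are not shown in this log, but a plausible minimal equivalent using keyutils looks like:

    # Run the test inside a fresh anonymous session keyring; keys the test
    # adds are discarded when the wrapped command (and the session) exits.
    keyctl session - bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
    # inside the session, "keyctl show @s" lists only keys this test created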
00:37:38.205 ************************************ 00:37:38.205 START TEST keyring_linux 00:37:38.205 ************************************ 00:37:38.205 12:22:14 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:38.205 Joined session keyring: 553999319 00:37:38.466 * Looking for test storage... 00:37:38.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:38.466 12:22:14 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:38.466 12:22:14 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:38.466 12:22:14 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:38.466 12:22:14 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:38.466 12:22:14 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:38.467 12:22:14 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.467 12:22:14 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.467 --rc genhtml_branch_coverage=1 00:37:38.467 --rc genhtml_function_coverage=1 00:37:38.467 --rc genhtml_legend=1 00:37:38.467 --rc geninfo_all_blocks=1 00:37:38.467 --rc geninfo_unexecuted_blocks=1 00:37:38.467 00:37:38.467 ' 00:37:38.467 12:22:14 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.467 --rc genhtml_branch_coverage=1 00:37:38.467 --rc genhtml_function_coverage=1 00:37:38.467 --rc genhtml_legend=1 00:37:38.467 --rc geninfo_all_blocks=1 00:37:38.467 --rc geninfo_unexecuted_blocks=1 00:37:38.467 00:37:38.467 ' 00:37:38.467 12:22:14 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.467 --rc genhtml_branch_coverage=1 00:37:38.467 --rc genhtml_function_coverage=1 00:37:38.467 --rc genhtml_legend=1 00:37:38.467 --rc geninfo_all_blocks=1 00:37:38.467 --rc geninfo_unexecuted_blocks=1 00:37:38.467 00:37:38.467 ' 00:37:38.467 12:22:14 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.467 --rc genhtml_branch_coverage=1 00:37:38.467 --rc genhtml_function_coverage=1 00:37:38.467 --rc genhtml_legend=1 00:37:38.467 --rc geninfo_all_blocks=1 00:37:38.467 --rc geninfo_unexecuted_blocks=1 00:37:38.467 00:37:38.467 ' 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.467 12:22:14 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.467 12:22:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.467 12:22:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.467 12:22:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.467 12:22:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:38.467 12:22:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
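The PATH echoed above has accumulated duplicate toolchain entries (golangci, protoc, go appear several times) because paths/export.sh prepends its directories unconditionally each time it is sourced. Duplicates are harmless to command lookup, but a guarded prepend keeps the variable readable; a small sketch (path_prepend is a hypothetical helper, not part of the SPDK scripts):

# prepend a directory to PATH only if it is not already present
path_prepend() {
  local dir=$1
  case ":$PATH:" in
    *":$dir:"*) ;;                  # already present, do nothing
    *) PATH="$dir:$PATH" ;;
  esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin     # second call is a no-op
export PATH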
00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:38.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:38.467 12:22:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:38.467 12:22:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:38.467 12:22:14 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:38.467 /tmp/:spdk-test:key0 00:37:38.467 12:22:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:38.467 12:22:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:38.467 
12:22:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:38.467 12:22:15 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:38.728 12:22:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:38.728 12:22:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:38.728 /tmp/:spdk-test:key1 00:37:38.728 12:22:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1306058 00:37:38.728 12:22:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1306058 00:37:38.728 12:22:15 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1306058 ']' 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:38.728 12:22:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:38.728 [2024-10-21 12:22:15.137396] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
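The prep_key calls above turn each raw hex string into the NVMe/TCP TLS PSK interchange form before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1: the configured string plus a 4-byte CRC32 trailer is base64-encoded and wrapped as NVMeTLSkey-1:<digest>:<b64>:. A sketch of that transformation, using a python heredoc the same way the traced `python -` step does (the little-endian byte order of the CRC trailer is an assumption taken from the SPDK helper this mirrors):

format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# 4-byte CRC32 trailer appended to the key bytes; assumed little-endian
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# expected to print the same :spdk-test:key0 value echoed above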
00:37:38.728 [2024-10-21 12:22:15.137477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306058 ] 00:37:38.729 [2024-10-21 12:22:15.217839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.729 [2024-10-21 12:22:15.253771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:39.671 [2024-10-21 12:22:15.921888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.671 null0 00:37:39.671 [2024-10-21 12:22:15.953947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:39.671 [2024-10-21 12:22:15.954300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:39.671 907795088 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:39.671 548152969 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1306391 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1306391 /var/tmp/bperf.sock 00:37:39.671 12:22:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1306391 ']' 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:39.671 12:22:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:39.671 [2024-10-21 12:22:16.029935] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
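keyctl add user above installs each formatted PSK into the kernel session keyring (@s) and prints the kernel-assigned serial number (907795088 and 548152969 here); later steps resolve the description back to a serial with keyctl search and read the payload with keyctl print. The full round trip as a standalone sketch (requires keyutils; the payload is the key0 value from this run):

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the serial, e.g. 907795088
keyctl search @s user :spdk-test:key0             # description -> serial lookup
keyctl print "$sn"                                # payload readback for verification
keyctl unlink "$sn" @s                            # remove the link from the session keyring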
00:37:39.671 [2024-10-21 12:22:16.029983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306391 ] 00:37:39.671 [2024-10-21 12:22:16.106265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.671 [2024-10-21 12:22:16.136122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.671 12:22:16 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.671 12:22:16 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:39.671 12:22:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:39.671 12:22:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:39.931 12:22:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:39.931 12:22:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:40.193 12:22:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:40.193 12:22:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:40.193 [2024-10-21 12:22:16.702455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:40.193 nvme0n1 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:40.454 12:22:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:40.454 12:22:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:40.454 12:22:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.454 12:22:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:40.455 12:22:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@25 -- # sn=907795088 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:40.715 12:22:17 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 907795088 == \9\0\7\7\9\5\0\8\8 ]] 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 907795088 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:40.715 12:22:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:40.715 Running I/O for 1 seconds... 00:37:42.100 24333.00 IOPS, 95.05 MiB/s 00:37:42.100 Latency(us) 00:37:42.100 [2024-10-21T10:22:18.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.100 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:42.100 nvme0n1 : 1.01 24333.49 95.05 0.00 0.00 5244.67 4068.69 11960.32 00:37:42.100 [2024-10-21T10:22:18.695Z] =================================================================================================================== 00:37:42.100 [2024-10-21T10:22:18.695Z] Total : 24333.49 95.05 0.00 0.00 5244.67 4068.69 11960.32 00:37:42.100 { 00:37:42.100 "results": [ 00:37:42.100 { 00:37:42.100 "job": "nvme0n1", 00:37:42.100 "core_mask": "0x2", 00:37:42.100 "workload": "randread", 00:37:42.100 "status": "finished", 00:37:42.100 "queue_depth": 128, 00:37:42.100 "io_size": 4096, 00:37:42.100 "runtime": 1.005281, 00:37:42.100 "iops": 24333.494813887857, 00:37:42.100 "mibps": 95.05271411674944, 00:37:42.100 "io_failed": 0, 00:37:42.100 "io_timeout": 0, 00:37:42.100 "avg_latency_us": 5244.674993050445, 00:37:42.100 "min_latency_us": 4068.693333333333, 00:37:42.100 "max_latency_us": 11960.32 00:37:42.100 } 00:37:42.100 ], 00:37:42.100 "core_count": 1 00:37:42.100 } 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:42.100 12:22:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:42.100 12:22:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:42.100 12:22:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:42.100 12:22:18 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:42.100 12:22:18 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
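check_keys above cross-checks two views of the same key: the serial reported by the bdevperf RPC server (keyring_get_keys, filtered with jq) against the serial the kernel returns for the same description, and the keyctl print payload against the expected PSK. A condensed sketch of that verification, using the same rpc.py path and socket as the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# serial as seen by the SPDK keyring module over JSON-RPC
rpc_sn=$($rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0").sn')

# serial as seen by the kernel session keyring
kernel_sn=$(keyctl search @s user :spdk-test:key0)

[[ $rpc_sn == "$kernel_sn" ]] || echo "serial mismatch: $rpc_sn vs $kernel_sn"
keyctl print "$rpc_sn"   # payload should equal the formatted PSK written earlier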
00:37:42.100 12:22:18 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:42.100 12:22:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.101 12:22:18 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:42.101 12:22:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.101 12:22:18 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:42.101 12:22:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:42.366 [2024-10-21 12:22:18.781672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:42.366 [2024-10-21 12:22:18.782125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2361e30 (107): Transport endpoint is not connected 00:37:42.366 [2024-10-21 12:22:18.783122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2361e30 (9): Bad file descriptor 00:37:42.366 [2024-10-21 12:22:18.784123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:42.366 [2024-10-21 12:22:18.784130] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:42.366 [2024-10-21 12:22:18.784135] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:42.366 [2024-10-21 12:22:18.784142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:42.366 request: 00:37:42.366 { 00:37:42.366 "name": "nvme0", 00:37:42.366 "trtype": "tcp", 00:37:42.366 "traddr": "127.0.0.1", 00:37:42.366 "adrfam": "ipv4", 00:37:42.366 "trsvcid": "4420", 00:37:42.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:42.366 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:42.366 "prchk_reftag": false, 00:37:42.366 "prchk_guard": false, 00:37:42.366 "hdgst": false, 00:37:42.366 "ddgst": false, 00:37:42.366 "psk": ":spdk-test:key1", 00:37:42.366 "allow_unrecognized_csi": false, 00:37:42.366 "method": "bdev_nvme_attach_controller", 00:37:42.366 "req_id": 1 00:37:42.366 } 00:37:42.366 Got JSON-RPC error response 00:37:42.366 response: 00:37:42.366 { 00:37:42.366 "code": -5, 00:37:42.366 "message": "Input/output error" 00:37:42.366 } 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@33 -- # sn=907795088 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 907795088 00:37:42.366 1 links removed 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@33 -- # sn=548152969 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 548152969 00:37:42.366 1 links removed 00:37:42.366 12:22:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1306391 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1306391 ']' 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1306391 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306391 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306391' 00:37:42.366 killing process with pid 1306391 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@969 -- # kill 1306391 00:37:42.366 Received shutdown signal, test time was about 1.000000 seconds 00:37:42.366 00:37:42.366 
Latency(us) 00:37:42.366 [2024-10-21T10:22:18.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.366 [2024-10-21T10:22:18.961Z] =================================================================================================================== 00:37:42.366 [2024-10-21T10:22:18.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:42.366 12:22:18 keyring_linux -- common/autotest_common.sh@974 -- # wait 1306391 00:37:42.630 12:22:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1306058 00:37:42.630 12:22:18 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1306058 ']' 00:37:42.630 12:22:18 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1306058 00:37:42.630 12:22:18 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:42.630 12:22:18 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:42.630 12:22:18 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306058 00:37:42.630 12:22:19 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:42.630 12:22:19 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:42.630 12:22:19 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306058' 00:37:42.630 killing process with pid 1306058 00:37:42.630 12:22:19 keyring_linux -- common/autotest_common.sh@969 -- # kill 1306058 00:37:42.630 12:22:19 keyring_linux -- common/autotest_common.sh@974 -- # wait 1306058 00:37:42.891 00:37:42.891 real 0m4.483s 00:37:42.891 user 0m8.110s 00:37:42.891 sys 0m1.410s 00:37:42.891 12:22:19 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:42.891 12:22:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:42.891 ************************************ 00:37:42.891 END TEST keyring_linux 00:37:42.891 ************************************ 00:37:42.891 12:22:19 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:42.891 12:22:19 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:42.891 12:22:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:42.891 12:22:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:42.891 12:22:19 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:42.891 12:22:19 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:42.892 12:22:19 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:42.892 12:22:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.892 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:37:42.892 12:22:19 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:42.892 12:22:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:42.892 12:22:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:42.892 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:37:51.035 INFO: APP EXITING 
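The cleanup trap fired above walks key0 and key1, resolves each description to its serial with keyctl search, and unlinks it from the session keyring ("1 links removed" per key). Simplified from the traced linux.sh logic (the removal of the on-disk PSK copies is assumed here; the trace only shows the unlink):

cleanup_keys() {
  local name sn
  for name in :spdk-test:key0 :spdk-test:key1; do
    # resolve the description to its serial; skip names that were never added
    sn=$(keyctl search @s user "$name" 2>/dev/null) || continue
    keyctl unlink "$sn" @s          # prints "N links removed"
    rm -f "/tmp/$name"              # assumed: drop the on-disk copy of the PSK
  done
}
trap cleanup_keys EXIT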
00:37:51.035 INFO: killing all VMs 00:37:51.035 INFO: killing vhost app 00:37:51.035 INFO: EXIT DONE 00:37:54.463 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:54.463 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:54.463 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:57.763 Cleaning 00:37:57.763 Removing: /var/run/dpdk/spdk0/config 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:57.763 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:57.763 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:57.763 Removing: /var/run/dpdk/spdk1/config 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:57.763 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:57.763 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:57.763 Removing: /var/run/dpdk/spdk2/config 00:37:57.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:57.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:57.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:58.024 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:58.024 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:58.024 Removing: /var/run/dpdk/spdk3/config 00:37:58.024 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:58.024 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:58.024 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:58.024 Removing: /var/run/dpdk/spdk4/config 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:58.024 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:58.024 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:58.024 Removing: /dev/shm/bdev_svc_trace.1 00:37:58.024 Removing: /dev/shm/nvmf_trace.0 00:37:58.024 Removing: /dev/shm/spdk_tgt_trace.pid735474 00:37:58.024 Removing: /var/run/dpdk/spdk0 00:37:58.024 Removing: /var/run/dpdk/spdk1 00:37:58.024 Removing: /var/run/dpdk/spdk2 00:37:58.024 Removing: /var/run/dpdk/spdk3 00:37:58.024 Removing: /var/run/dpdk/spdk4 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1000868 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1001584 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1002303 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1007356 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1014056 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1014057 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1014058 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1018743 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1029093 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1034395 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1041582 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1043051 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1044612 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1046143 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1051906 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1056945 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1066041 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1066045 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1071201 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1071432 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1071759 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1072142 00:37:58.024 Removing: /var/run/dpdk/spdk_pid1072241 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1077812 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1078465 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1083931 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1087735 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1094117 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1100674 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1110910 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1119490 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1119545 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1143018 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1143699 00:37:58.284 Removing: 
/var/run/dpdk/spdk_pid1144390 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1145060 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1146114 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1146796 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1147486 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1148161 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1153219 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1153556 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1160765 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1160974 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1167432 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1172518 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1184168 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1184927 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1190338 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1190689 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1195723 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1202451 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1205539 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1217688 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1228383 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1230391 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1231403 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1251579 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1256299 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1259492 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1267258 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1267272 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1273143 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1275471 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1277865 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1279155 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1281566 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1283093 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1293625 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1294252 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1294765 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1297605 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1298253 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1298878 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1303715 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1303805 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1305619 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1306058 00:37:58.284 Removing: /var/run/dpdk/spdk_pid1306391 00:37:58.284 Removing: /var/run/dpdk/spdk_pid733525 00:37:58.284 Removing: /var/run/dpdk/spdk_pid735474 00:37:58.284 Removing: /var/run/dpdk/spdk_pid736316 00:37:58.284 Removing: /var/run/dpdk/spdk_pid737363 00:37:58.284 Removing: /var/run/dpdk/spdk_pid737620 00:37:58.284 Removing: /var/run/dpdk/spdk_pid738768 00:37:58.545 Removing: /var/run/dpdk/spdk_pid738813 00:37:58.545 Removing: /var/run/dpdk/spdk_pid739246 00:37:58.545 Removing: /var/run/dpdk/spdk_pid740383 00:37:58.545 Removing: /var/run/dpdk/spdk_pid740969 00:37:58.545 Removing: /var/run/dpdk/spdk_pid741327 00:37:58.545 Removing: /var/run/dpdk/spdk_pid741660 00:37:58.545 Removing: /var/run/dpdk/spdk_pid742057 00:37:58.545 Removing: /var/run/dpdk/spdk_pid742457 00:37:58.545 Removing: /var/run/dpdk/spdk_pid742813 00:37:58.545 Removing: /var/run/dpdk/spdk_pid743123 00:37:58.545 Removing: /var/run/dpdk/spdk_pid743391 00:37:58.545 Removing: /var/run/dpdk/spdk_pid744621 00:37:58.545 Removing: /var/run/dpdk/spdk_pid747931 00:37:58.546 Removing: /var/run/dpdk/spdk_pid748252 00:37:58.546 Removing: /var/run/dpdk/spdk_pid748615 00:37:58.546 Removing: /var/run/dpdk/spdk_pid748629 00:37:58.546 Removing: 
/var/run/dpdk/spdk_pid749088 00:37:58.546 Removing: /var/run/dpdk/spdk_pid749324 00:37:58.546 Removing: /var/run/dpdk/spdk_pid749703 00:37:58.546 Removing: /var/run/dpdk/spdk_pid749988 00:37:58.546 Removing: /var/run/dpdk/spdk_pid750215 00:37:58.546 Removing: /var/run/dpdk/spdk_pid750415 00:37:58.546 Removing: /var/run/dpdk/spdk_pid750660 00:37:58.546 Removing: /var/run/dpdk/spdk_pid750786 00:37:58.546 Removing: /var/run/dpdk/spdk_pid751230 00:37:58.546 Removing: /var/run/dpdk/spdk_pid751580 00:37:58.546 Removing: /var/run/dpdk/spdk_pid751948 00:37:58.546 Removing: /var/run/dpdk/spdk_pid756504 00:37:58.546 Removing: /var/run/dpdk/spdk_pid761892 00:37:58.546 Removing: /var/run/dpdk/spdk_pid773691 00:37:58.546 Removing: /var/run/dpdk/spdk_pid774553 00:37:58.546 Removing: /var/run/dpdk/spdk_pid779782 00:37:58.546 Removing: /var/run/dpdk/spdk_pid780211 00:37:58.546 Removing: /var/run/dpdk/spdk_pid785751 00:37:58.546 Removing: /var/run/dpdk/spdk_pid792845 00:37:58.546 Removing: /var/run/dpdk/spdk_pid796136 00:37:58.546 Removing: /var/run/dpdk/spdk_pid808775 00:37:58.546 Removing: /var/run/dpdk/spdk_pid819574 00:37:58.546 Removing: /var/run/dpdk/spdk_pid821672 00:37:58.546 Removing: /var/run/dpdk/spdk_pid822858 00:37:58.546 Removing: /var/run/dpdk/spdk_pid844138 00:37:58.546 Removing: /var/run/dpdk/spdk_pid849142 00:37:58.546 Removing: /var/run/dpdk/spdk_pid906062 00:37:58.546 Removing: /var/run/dpdk/spdk_pid912442 00:37:58.546 Removing: /var/run/dpdk/spdk_pid919603 00:37:58.546 Removing: /var/run/dpdk/spdk_pid927381 00:37:58.546 Removing: /var/run/dpdk/spdk_pid927443 00:37:58.546 Removing: /var/run/dpdk/spdk_pid928456 00:37:58.546 Removing: /var/run/dpdk/spdk_pid929476 00:37:58.546 Removing: /var/run/dpdk/spdk_pid930540 00:37:58.546 Removing: /var/run/dpdk/spdk_pid931130 00:37:58.546 Removing: /var/run/dpdk/spdk_pid931223 00:37:58.546 Removing: /var/run/dpdk/spdk_pid931454 00:37:58.546 Removing: /var/run/dpdk/spdk_pid931570 00:37:58.546 Removing: /var/run/dpdk/spdk_pid931575 00:37:58.546 Removing: /var/run/dpdk/spdk_pid932576 00:37:58.546 Removing: /var/run/dpdk/spdk_pid933582 00:37:58.546 Removing: /var/run/dpdk/spdk_pid934587 00:37:58.546 Removing: /var/run/dpdk/spdk_pid935268 00:37:58.546 Removing: /var/run/dpdk/spdk_pid935270 00:37:58.546 Removing: /var/run/dpdk/spdk_pid935604 00:37:58.546 Removing: /var/run/dpdk/spdk_pid937042 00:37:58.546 Removing: /var/run/dpdk/spdk_pid938439 00:37:58.807 Removing: /var/run/dpdk/spdk_pid948694 00:37:58.807 Removing: /var/run/dpdk/spdk_pid983182 00:37:58.807 Removing: /var/run/dpdk/spdk_pid988590 00:37:58.807 Removing: /var/run/dpdk/spdk_pid990588 00:37:58.807 Removing: /var/run/dpdk/spdk_pid992908 00:37:58.807 Removing: /var/run/dpdk/spdk_pid993104 00:37:58.807 Removing: /var/run/dpdk/spdk_pid993311 00:37:58.807 Removing: /var/run/dpdk/spdk_pid993645 00:37:58.807 Removing: /var/run/dpdk/spdk_pid994356 00:37:58.807 Removing: /var/run/dpdk/spdk_pid996375 00:37:58.807 Removing: /var/run/dpdk/spdk_pid997509 00:37:58.807 Removing: /var/run/dpdk/spdk_pid998165 00:37:58.807 Clean 00:37:58.807 12:22:35 -- common/autotest_common.sh@1451 -- # return 0 00:37:58.807 12:22:35 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:37:58.807 12:22:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:58.807 12:22:35 -- common/autotest_common.sh@10 -- # set +x 00:37:58.807 12:22:35 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:37:58.807 12:22:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:58.807 12:22:35 -- common/autotest_common.sh@10 -- 
# set +x 00:37:58.807 12:22:35 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:58.807 12:22:35 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:58.807 12:22:35 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:58.807 12:22:35 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:37:58.807 12:22:35 -- spdk/autotest.sh@394 -- # hostname 00:37:58.807 12:22:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:59.068 geninfo: WARNING: invalid characters removed from testname! 00:38:25.647 12:23:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:27.557 12:23:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:28.939 12:23:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.850 12:23:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.233 12:23:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:34.143 12:23:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.526 12:23:11 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:35.526 12:23:11 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:38:35.526 12:23:11 -- common/autotest_common.sh@1691 -- $ lcov --version 00:38:35.526 12:23:11 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:38:35.526 12:23:12 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:38:35.526 12:23:12 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:38:35.526 12:23:12 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:38:35.526 12:23:12 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:38:35.526 12:23:12 -- scripts/common.sh@336 -- $ IFS=.-: 00:38:35.526 12:23:12 -- scripts/common.sh@336 -- $ read -ra ver1 00:38:35.526 12:23:12 -- scripts/common.sh@337 -- $ IFS=.-: 00:38:35.526 12:23:12 -- scripts/common.sh@337 -- $ read -ra ver2 00:38:35.526 12:23:12 -- scripts/common.sh@338 -- $ local 'op=<' 00:38:35.526 12:23:12 -- scripts/common.sh@340 -- $ ver1_l=2 00:38:35.526 12:23:12 -- scripts/common.sh@341 -- $ ver2_l=1 00:38:35.526 12:23:12 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:38:35.526 12:23:12 -- scripts/common.sh@344 -- $ case "$op" in 00:38:35.526 12:23:12 -- scripts/common.sh@345 -- $ : 1 00:38:35.526 12:23:12 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:38:35.526 12:23:12 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:35.526 12:23:12 -- scripts/common.sh@365 -- $ decimal 1 00:38:35.526 12:23:12 -- scripts/common.sh@353 -- $ local d=1 00:38:35.526 12:23:12 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:38:35.526 12:23:12 -- scripts/common.sh@355 -- $ echo 1 00:38:35.526 12:23:12 -- scripts/common.sh@365 -- $ ver1[v]=1 00:38:35.526 12:23:12 -- scripts/common.sh@366 -- $ decimal 2 00:38:35.526 12:23:12 -- scripts/common.sh@353 -- $ local d=2 00:38:35.526 12:23:12 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:38:35.526 12:23:12 -- scripts/common.sh@355 -- $ echo 2 00:38:35.526 12:23:12 -- scripts/common.sh@366 -- $ ver2[v]=2 00:38:35.526 12:23:12 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:38:35.526 12:23:12 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:38:35.526 12:23:12 -- scripts/common.sh@368 -- $ return 0 00:38:35.526 12:23:12 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.526 12:23:12 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:38:35.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.526 --rc genhtml_branch_coverage=1 00:38:35.526 --rc genhtml_function_coverage=1 00:38:35.526 --rc genhtml_legend=1 00:38:35.526 --rc geninfo_all_blocks=1 00:38:35.526 --rc geninfo_unexecuted_blocks=1 00:38:35.526 00:38:35.526 ' 00:38:35.526 12:23:12 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:38:35.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.526 --rc genhtml_branch_coverage=1 00:38:35.526 --rc genhtml_function_coverage=1 00:38:35.526 --rc genhtml_legend=1 00:38:35.526 --rc geninfo_all_blocks=1 00:38:35.526 --rc geninfo_unexecuted_blocks=1 00:38:35.526 00:38:35.526 ' 00:38:35.526 12:23:12 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:38:35.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.526 --rc genhtml_branch_coverage=1 00:38:35.526 
--rc genhtml_function_coverage=1 00:38:35.526 --rc genhtml_legend=1 00:38:35.526 --rc geninfo_all_blocks=1 00:38:35.526 --rc geninfo_unexecuted_blocks=1 00:38:35.526 00:38:35.526 ' 00:38:35.526 12:23:12 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:38:35.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.526 --rc genhtml_branch_coverage=1 00:38:35.526 --rc genhtml_function_coverage=1 00:38:35.526 --rc genhtml_legend=1 00:38:35.526 --rc geninfo_all_blocks=1 00:38:35.526 --rc geninfo_unexecuted_blocks=1 00:38:35.526 00:38:35.526 ' 00:38:35.526 12:23:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:35.526 12:23:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:38:35.526 12:23:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:35.526 12:23:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.526 12:23:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.526 12:23:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.526 12:23:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.526 12:23:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.526 12:23:12 -- paths/export.sh@5 -- $ export PATH 00:38:35.527 12:23:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.527 12:23:12 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:35.527 12:23:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:38:35.527 12:23:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729506192.XXXXXX 00:38:35.527 12:23:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729506192.mnteHi 00:38:35.527 12:23:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:38:35.527 12:23:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:38:35.527 12:23:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:38:35.527 12:23:12 -- common/autobuild_common.sh@499 -- $ 
scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:35.527 12:23:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:35.527 12:23:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:38:35.527 12:23:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:38:35.527 12:23:12 -- common/autotest_common.sh@10 -- $ set +x 00:38:35.527 12:23:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:38:35.527 12:23:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:38:35.527 12:23:12 -- pm/common@17 -- $ local monitor 00:38:35.527 12:23:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.527 12:23:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.527 12:23:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.527 12:23:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.527 12:23:12 -- pm/common@21 -- $ date +%s 00:38:35.527 12:23:12 -- pm/common@25 -- $ sleep 1 00:38:35.527 12:23:12 -- pm/common@21 -- $ date +%s 00:38:35.527 12:23:12 -- pm/common@21 -- $ date +%s 00:38:35.527 12:23:12 -- pm/common@21 -- $ date +%s 00:38:35.527 12:23:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729506192 00:38:35.527 12:23:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729506192 00:38:35.527 12:23:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729506192 00:38:35.527 12:23:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729506192 00:38:35.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729506192_collect-cpu-load.pm.log 00:38:35.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729506192_collect-vmstat.pm.log 00:38:35.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729506192_collect-cpu-temp.pm.log 00:38:35.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729506192_collect-bmc-pm.bmc.pm.log 00:38:36.732 12:23:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:38:36.732 12:23:13 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:38:36.732 12:23:13 -- spdk/autopackage.sh@14 -- $ timing_finish 00:38:36.732 12:23:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:36.732 12:23:13 
00:38:36.732 12:23:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:38:36.732 12:23:13 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:38:36.732 12:23:13 -- spdk/autopackage.sh@14 -- $ timing_finish
00:38:36.732 12:23:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:36.732 12:23:13 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:36.732 12:23:13 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:36.732 12:23:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:36.732 12:23:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:36.732 12:23:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:36.732 12:23:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:36.732 12:23:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:36.732 12:23:13 -- pm/common@44 -- $ pid=1319041
00:38:36.732 12:23:13 -- pm/common@50 -- $ kill -TERM 1319041
00:38:36.732 12:23:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:36.732 12:23:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:36.732 12:23:13 -- pm/common@44 -- $ pid=1319042
00:38:36.732 12:23:13 -- pm/common@50 -- $ kill -TERM 1319042
00:38:36.732 12:23:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:36.732 12:23:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:36.732 12:23:13 -- pm/common@44 -- $ pid=1319044
00:38:36.732 12:23:13 -- pm/common@50 -- $ kill -TERM 1319044
00:38:36.732 12:23:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:36.732 12:23:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:36.732 12:23:13 -- pm/common@44 -- $ pid=1319069
00:38:36.732 12:23:13 -- pm/common@50 -- $ sudo -E kill -TERM 1319069
+ [[ -n 649024 ]]
00:38:36.732 + sudo kill 649024
00:38:36.743 [Pipeline] }
00:38:36.758 [Pipeline] // stage
00:38:36.764 [Pipeline] }
00:38:36.779 [Pipeline] // timeout
00:38:36.784 [Pipeline] }
00:38:36.799 [Pipeline] // catchError
00:38:36.805 [Pipeline] }
00:38:36.820 [Pipeline] // wrap
00:38:36.828 [Pipeline] }
00:38:36.844 [Pipeline] // catchError
00:38:36.854 [Pipeline] stage
00:38:36.856 [Pipeline] { (Epilogue)
00:38:36.870 [Pipeline] catchError
00:38:36.872 [Pipeline] {
00:38:36.886 [Pipeline] echo
00:38:36.888 Cleanup processes
00:38:36.894 [Pipeline] sh
00:38:37.184 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:37.184 1319219 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:37.184 1319740 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:37.200 [Pipeline] sh
00:38:37.488 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:37.488 ++ grep -v 'sudo pgrep'
00:38:37.488 ++ awk '{print $1}'
00:38:37.488 + sudo kill -9 1319219
00:38:37.502 [Pipeline] sh
00:38:37.790 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:50.035 [Pipeline] sh
00:38:50.324 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:50.324 Artifacts sizes are good
00:38:50.340 [Pipeline] archiveArtifacts
00:38:50.347 Archiving artifacts
00:38:50.507 [Pipeline] sh
00:38:50.799 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
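[Editor's note] The teardown mirrors the startup: the EXIT trap installed at autobuild_common.sh@505 fires stop_monitor_resources, which checks each collect-*.pid file and TERMs the recorded pid (pm/common@43-@50), while the Epilogue separately sweeps for stray processes with pgrep, filtering its own invocation out via grep -v before kill -9. A hedged sketch of both steps follows, assuming a Jenkins-style $WORKSPACE variable; xargs -r stands in for the command substitution the log actually uses.

# stop_monitors: hypothetical mirror of the pm/common teardown above.
stop_monitors() {
    local output_dir=$1 pidfile pid
    for pidfile in "$output_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue          # pm/common@43: skip absent monitors
        pid=$(<"$pidfile")                     # pm/common@44: read the recorded pid
        kill -TERM "$pid" 2>/dev/null || true  # pm/common@50: graceful stop
    done
    # Epilogue sweep: list survivors under the workspace, drop the pgrep
    # invocation itself from the listing, extract the pid column, and
    # force-kill whatever is left (the ipmitool dump in this run).
    sudo pgrep -af "$WORKSPACE/spdk" \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9
}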
00:38:50.815 [Pipeline] cleanWs
00:38:50.826 [WS-CLEANUP] Deleting project workspace...
00:38:50.826 [WS-CLEANUP] Deferred wipeout is used...
00:38:50.834 [WS-CLEANUP] done
00:38:50.836 [Pipeline] }
00:38:50.854 [Pipeline] // catchError
00:38:50.866 [Pipeline] sh
00:38:51.156 + logger -p user.info -t JENKINS-CI
00:38:51.167 [Pipeline] }
00:38:51.181 [Pipeline] // stage
00:38:51.186 [Pipeline] }
00:38:51.201 [Pipeline] // node
00:38:51.207 [Pipeline] End of Pipeline
00:38:51.247 Finished: SUCCESS